I'm furious about this data breach. I think our laws need to be updated to make sloppy security an existential threat to businesses. Optus should be fined by the Australian government within an inch of its life. It's not ok to profit from sloppy security work and then leave regular people to pick up the tab when it goes wrong.
And we need to put other companies with terrible security on notice. I think the only way big companies will move is by making their executive team sweat money.
That's how it works everywhere else in the economy - if your negligence causes harm, you're liable. Serve bad food in a restaurant? Sued. Sell sporting equipment that causes injury? Sued. Misrepresent yourself? Sued, and potentially criminal charges. Medical malpractice? Sued. But somehow, if your sloppy software causes harm, that's ok? What rubbish. Security malpractice should bear the same punishment as everything else.
Maybe the price of paid software will go up. That's fine. Maybe there aren't enough qualified security engineers. Also, fine.
If you don't have the expertise to manufacture a safe car, we've decided you can't enter the car business at all. Likewise, if you don't have the technical skill to keep my data secure, you have no business storing my data at all.
Optus was already subject to an enforceable undertaking for a previous breach. They were supposed to uplift their security, but they did the bare minimum required by the judge. The bare minimum in this case was to apply automatic patches to the production systems only. They ran the update tools on end-of-life products -- which does nothing -- and then marked them as done. Non-production systems were specifically excluded from updates because they weren't mentioned in the court order.
This breach happened through a non-production system. Shock. Surprise.
The Australian version of the NSA, the ASD, produces very good information security guides for network design and security hardening. Obviously not read by the Optus CEO (which is kinda insane - when did a telco stop being a tech company?).
Anyway, a key point is not to have a network design that's like Poland, where people can just drive across it with little effort. That goes for developers and hackers alike. It should be organizationally hard for a developer to request to connect a test application to a production system. Change control should automatically raise that as a ticket for review (something like the gate sketched below), and the network team should also be cagey about doing it. It was a big oversight.
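To make that concrete, here's a rough sketch of the kind of gate I mean - the zone names and the ChangeRequest shape are invented for illustration, not anything Optus or any real change-management tool uses. The point is just that a request connecting a non-production app to a production system should land in a review queue instead of being auto-approved.

```python
# Hypothetical change-control gate: force human review whenever a request
# crosses from a non-production zone into production. Names are made up.
from dataclasses import dataclass

PRODUCTION_ZONES = {"prod"}
NON_PRODUCTION_ZONES = {"dev", "test", "staging"}

@dataclass
class ChangeRequest:
    requester: str
    source_zone: str       # where the connecting application lives
    destination_zone: str  # where the target system lives
    justification: str

def requires_security_review(cr: ChangeRequest) -> bool:
    """Connecting non-production to production is never auto-approved."""
    return (cr.source_zone in NON_PRODUCTION_ZONES
            and cr.destination_zone in PRODUCTION_ZONES)

if __name__ == "__main__":
    cr = ChangeRequest("dev-team", "test", "prod", "need prod-like data to debug")
    if requires_security_review(cr):
        print("Escalate to network/security review:", cr.justification)
    else:
        print("Standard approval path")
```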
The mixing of personal data back into an insecure test system is also a bit iffy.
If you think about classification for government secrets, sometimes a large volume of low-level data will attract a high protection level, because its loss represents significant harm.
Optus should have seen its database of client secrets as a top-secret asset and guarded it as such. The release of this information is having profound impacts on many governments and government systems, as well as on the individuals affected (almost 50% of Australia).
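As a toy illustration of that aggregation idea - thresholds and labels invented, not any real government classification scheme - the handling level should scale with how much you hold, not just with how sensitive one record is:

```python
# Invented thresholds: individually "low" records deserve much stricter
# handling once you hold enough of them to cause mass harm if leaked.
def handling_level(per_record_level: str, record_count: int) -> str:
    if per_record_level == "low" and record_count >= 1_000_000:
        return "treat as highly sensitive"  # e.g. an identity database covering half a country
    if per_record_level == "low" and record_count >= 10_000:
        return "treat as sensitive"
    return per_record_level

print(handling_level("low", 10_000_000))  # -> treat as highly sensitive
```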
I think we are still too accepting of non-technically-literate CEOs. They don't have to know how to solve all the issues, but they should know to be very, very curious about their IT systems. And this is a telco!
The culture of many orgs that we would naively think of as "IT shops" is often wildly different. Telcos especially are a weird bunch because of their history and the way they run projects.
Imagine you're a project manager at a telco, and you are given a project to make a network link from a capital city to a nearby town. You hire some contractors to dig up the dirt, lay down the fibre, put the dirt back in, and then you are done. The project is finished! The budget is spent, and then there is no more money, and nothing to spend it on anyway.
Fibre does not need security patches. Fibre does not need monthly updates. You can simply forget about it, because it doesn't have an end-of-support date. Copper will eventually corrode away and need replacing, but we're talking timescales on the order of 40, 60, 80 years or more, not 4-8 years like in the software world.
Those project managers get promoted, eventually to senior management. The whole budget starts revolving around these short-term, "done and dusted" type projects. Nobody at any senior layer of management develops an expectation of anything else. You deploy things, then you move on to the next project! MOVE. ON.
The same people manage IT and software, but this is a relatively new thing - certainly at a new scale for these organisations. I was in telco "data centres" 20 years ago, and they were... cute. Just a big office room with maybe two dozen racks in them, the majority of which were optical switching gear.
Now? Telcos might have thousands of virtual machines running tens of thousands of distinct pieces of software. Software deployed in projects run by the same project managers who were used to laying fibre and walking away.
This kind of breach is the consequence of this corporate culture. It's not just the CEO, it's also the COO, the CFO, the CIO, and all the way down to all but the last tier of random befuddled contractors wondering why they're not allowed to touch anything that's not actively being built new.
Don't think for a second that any of the other major telcos are any different or better.
I agree that treating production vs non-production as a dichotomy can be problematic, but that doesn't mean some systems aren't more sensitive than others.
Also, security is not one-dimensional. A system's required level of confidentiality might be very different from its required level of availability. Being explicit about this might be better than trying to lump different requirements into a "production" label.
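A rough sketch of what I mean, with invented system names and levels - tag each system on separate axes and drive policy off the highest one, rather than off a prod/nonprod label:

```python
# Illustrative only: per-system requirements on separate axes
# (confidentiality / integrity / availability) instead of one "production" flag.
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass(frozen=True)
class SecurityProfile:
    confidentiality: Level
    integrity: Level
    availability: Level

systems = {
    # A test rig full of fake data: low on every axis.
    "test-rig": SecurityProfile(Level.LOW, Level.LOW, Level.LOW),
    # A customer identity database: high confidentiality even though a short
    # outage would be tolerable.
    "customer-identity-db": SecurityProfile(Level.HIGH, Level.HIGH, Level.MODERATE),
}

def patching_priority(profile: SecurityProfile) -> Level:
    # Simple policy sketch: cadence driven by the highest axis,
    # not by whether the box is labelled "production".
    return max(profile.confidentiality, profile.integrity, profile.availability)

for name, profile in systems.items():
    print(name, "-> patch priority:", patching_priority(profile).name)
```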
> There's no such thing as non-production/production systems
Strongly disagree. If it's an isolated copy of your production system with fake credentials, fake data, etc., there's no associated risk. We explicitly turn off various security checks in our nonprod environment because it makes it easier to poke around and debug issues. That would never fly in prod, for obvious reasons.
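Roughly the shape of it - simplified, with made-up check names, and only sensible on the assumption that the copy really is isolated and the data really is fake:

```python
# Environment-gated checks (hypothetical names). Some checks are relaxed in
# an isolated nonprod copy to make debugging easier; others stay on
# everywhere, because a nonprod box is still a foothold into the network.
import os

ENVIRONMENT = os.getenv("APP_ENV", "nonprod")  # e.g. "prod" or "nonprod"

SECURITY_CHECKS = {
    "enforce_mfa_for_admin_console": ENVIRONMENT == "prod",
    "block_verbose_error_pages": ENVIRONMENT == "prod",
    "require_authenticated_api_access": True,
    "deny_unauthenticated_internet_ingress": True,
}

def check_enabled(name: str) -> bool:
    return SECURITY_CHECKS.get(name, True)  # unknown checks default to on
```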
I guess the main assumption here is “if it’s an isolated copy of your production system”
If that's truly the case, and it's fake data, I'd generally agree. That isn't what's happened with Optus, and I dare say with many other orgs, where nonprod is generally interchangeable with "less secure".
> There's no such thing as non-production/production systems imho
From a security point of view I agree - certainly my current employer makes no distinction and a vulnerability is a vulnerability no matter where it is.
Many are... they're behind some minimal security because, if they're truly non-production, they have a pile of fake data. I think one system I worked on had over 10k users with first name john_123 because... well, my name is John and I'm lazy :)
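Something like this, if you want the lazy version (field names and formats invented for illustration):

```python
# Seed a non-production system with obviously fake users, john_123 style.
def fake_users(count: int, base_name: str = "john"):
    for i in range(1, count + 1):
        yield {
            "first_name": f"{base_name}_{i}",
            "last_name": "test",
            "email": f"{base_name}_{i}@example.com",  # reserved example domain
        }

if __name__ == "__main__":
    for user in fake_users(3):
        print(user)
```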
I like the way Japan approaches data leaks. The government can fine a company a fixed amount per person (how much depends on the nature of the PII leaked) and can also prevent a company from trading for a period of time. I don't have any figures to say how well it works, but I can say that companies over here are bloody afraid of the consequences.
Anecdata, but I think for pre-paid services Optus never stored that information, and only used it for the required identification.
I had activated a pre-paid Optus service (in a store, using my driver's licence as ID) but let it lapse a year ago, and allegedly my licence number was not in the breach.
I had a prepaid Optus number too. I let it lapse years ago, but I got the same email about name, number and address being leaked, but not licence or passport numbers.
I never used the Optus number and I have moved since then. So maybe impact is minimal. But I'm still angry.
With respect, that's not right. I've advised Australian companies of all sizes (from one-man bands to ASX-listed companies) for decades, and they are very heavily motivated by avoiding bad press from a preventable accident, avoiding company and personal reputational damage, being fired, being sued personally (noting that e.g. D&O insurance almost invariably excludes fraudulent acts), and suffering regulatory investigation/enforcement (including e.g. fitness to hold licences). They are also motivated because a surprising number of people believe that it is important to do the right thing, and (contrary to popular belief on the interwebs) this doesn't change just because they are employed in a business.
In cases where there is intentional wrongdoing, existing laws already make complicit people liable both to civil action by people harmed, plus criminal or civil penalty provisions by ASIC (or the ACCC, depending on the industry and conduct). As a plaintiff, you typically join them to increase your potential pool of recoverable assets for your clients.
The same is true if there are breaches of the Australian Consumer Law and the person has a particular level of knowledge that falls short of intention.
In cases of pure negligence, like this, if the negligence rises to a criminal standard, then criminal laws and penalties already apply. How and when this works has been a topic for over 50 years, since Tesco v Nattrass in the UK.
In other words, there are already very significant legal mechanisms in place, and by and large they work - and not all of them involve having executives personally liable. In any event, many already do, and this has been worked out carefully over a long period.
The whole idea of forming a business entity is to limit the liability of its principals. This is an important incentive to make people go into business without risking their home and whatnot. Let's not throw out the baby with the bathwater.
I think the limited liability is for debts to creditors and shareholders, rather than protection against criminal or other behaviour.
Maybe I’m wrong, but as I understand it, Australian law treats a corporation as a person and the directors are answerable if the corporation breaks the law.
The real problem (as I see it) is that a company the size of Optus can afford to defend their behaviour for so long that enforcement itself becomes a burden. The closer you get to the CEO and board, the more they will spend shareholders money defending themselves.
Companies exist to limit the liability of shareholders / investors, not principals. The idea is that you can buy equity and receive dividends if the company goes well, without standing to lose more than you put in if the company harms others through mismanagement. The company doesn't shield the directors from personal liability for that mismanagement as a matter of law, it just pays for high quality legal representation if the allegation is made.
Oh let's. Please. The 'baby' is killing our entire living world, and nothing short of removing the idiocy of limited liability is very likely to save it.
Actions without proportional consequences were never going to lead to anywhere but destruction. Folk psychology told us that was likely (however much virtual-economists feigned ignorance). Empirical reality has confirmed it.
It's also fair to shareholders of public companies, who are often also getting screwed by executives abusing an agency conflict of interest to pad their own bonuses, at the expense of both the shareholders and society.
This is a straight up case of negligence, just via a different means and with different damage from usual. The only impediment to suing is the cost of doing so vs the damage suffered, which is still too high for the average person.
The usual way around this is via a class action, and there are already at least 2 being prepared that I know of. They will run and probably settle at some point. The main thing to be policed is to avoid the funders and solicitors taking too much of the proceeds, although that process is already in hand due to recent abuses.
> I think our laws need to be updated to make sloppy security an existential threat to businesses.
No. Or perhaps yes, but only in part. Our laws need to be updated to make corporate malfeasance in general an existential threat to executives and board members, as individuals. Ordinary Aussies end up in jail for unpaid parking fines. Centrelink 'customers' (ugh) get robodebted, often into depression, sometimes to death. The "law" rules us plebs with a rod of iron, and wields it with abandon. Heads of industry get away with pretty much anything. If they want to skirt a law, all they have to do is shove fines on balance sheets. Those fines can be in the millions or billions; they're just another business expense.
Suits in jail would change the so-called 'incentives' (myopic concept though that is) dramatically. "More suits in jail" should be the catchcry of the next few generations. Every T shirt. Every graffito. Every pop song.
Of course to really fix this stuff we have to go after 'investors' and eliminate the absurdity of limited liability ("gamble on destroying the world and risk only your stake!"), but that ideology is now so culturally rigidified it would require a collapse of 'civilisation' to eliminate. That may well be coming of course.
If this company is fined a large amount of money or goes out of business because of those fines, the executives will be fine, as they often have a diversified portfolio, savings, and friends. However, to make up that loss they could raise prices or fire employees.
It's not perfect, but I think levying massive fines would still be quite effective. Nobody wants to be the CEO at the helm when a successful company was ruined, or when the stock price tanked due to bad management decisions.
Imagine a car company which sold cheap cars that injured people. "If we fine them for their actions, the executive team will just raise prices or fire employees!" Yeah - maybe don't sell a car that's so cheap that it causes accidents.
Certainly it would be much more effective than the system we have now - where their negligence seems to have had no negative consequences for the company.
But if you're suggesting there should be personal liability for CEOs as a result of data breaches like this, then I think I could be convinced.
I don't think the CEO has direct knowledge of the company's security, and the only method of control is to hire who they think is the best CTO. Then the same for the CTO and whomever they hire, etc. The general attitude I've seen is that you hire someone below you and put nearly 100% trust in them; they are in control. Anything else is considered just inappropriate, like micromanagement for example.
"Nobody wants to be the CEO at the helm when a successful company was ruined"
Are successful people really concerned about pride? I always thought it was money, and I'm not being sarcastic. I tried to find some data to back this up but couldn't. However, as an anecdotal example, Henry Ford II was the CEO/chairman/president from 1945 to 1980. This includes the period when the Ford Pinto (70-80) existed and received huge amounts of negative press. It's also when imports started to take a major toll on American car manufacturers (late 70s). The only reason he retired was the mandatory retirement age at Ford.
To fix this I think we need to ignore punishments for now and focus on prevention. A government agency, funded by fees, should do yearly audits on companies that have more than X users or some other variable to confirm compliance.
And yes, Boeing has this, but they were too chummy with the regulators, and so on. Even though it's not perfect, it's a first step. We can fix the problems later, similar to what came about in the aerospace industry.
Would this apply to government run entities? Like should the person who headed the CovidSafe app do time or cop a fine? I'd be cool with getting Mr Morrison in front of a judge for pushing that.
Honestly I trust my data with Optus way more than a company that can't build a secure MyGov app where my tax and health info are stored.
Yeah, you are talking about a death penalty for corporations (corp-death). It's been suggested before. The thing is that probably the main reason corporations were invented is to limit liability, thus enabling them to do amazing industrial and technical feats - making a covid vaccine, for example. At the same time, it also gives you Big Tobacco. Changing this Faustian bargain will have quite large effects. Corporate access to capital will become more expensive. Executives will become less bold. Corporations will become even more bureaucratic. Employees will spend a lot of time shifting responsibility around. It could be net negative. I don't know what the solution is.