
He is right in a sense, and cases like this give him proof, but on the other hand, most people don't see the point in patching their software. They'd just keep it around unpatched while connecting it to the network. Are millions of vulnerable devices better than giving software vendors the ability to remotely patch it?


I use iOS and have app auto-updates disabled (but not system updates). We are at a point where auto-updates are riskier than the security flaws themselves - especially since iOS has a pretty good sandbox and it's impossible for one app to access another app's data. Additionally, an app usually connects to a pretty limited set of servers and is not publicly reachable, so the attack surface is pretty small.

Another point is that updates often completely change the UI or app behaviour, and you only find out when you want it the least. I once came out of a bar in the middle of a cold night - tired, after a few beers - and just wanted to use my bike-sharing app to unlock a free-floating bike to get home, while the app decided it had to introduce a completely new UI and forced me through an unskippable "guided tour" of the new features right on the spot.


> We are at a point where auto-updates are riskier than the security flaws themselves - especially since iOS has a pretty good sandbox and it's impossible for one app to access another app's data. Additionally, an app usually connects to a pretty limited set of servers and is not publicly reachable, so the attack surface is pretty small.

I'd have to say that most apps now connect to a rather large number of hosts/servers, and it's getting increasingly untenable not to offer users proper control over this. I get that Apple wants its devices to be "friendly computers", but looking at my firewall logs I'm seeing:

- third-party audience segmenting
- third-party analytics
- third-party static content being fetched
- third-party ad networks
- first- or third-party generic cloud server connections

I think the attack surface of apps is quite significant if you consider that the app itself may have been built to monetize data - there's no outbound traffic filtering to check that the app isn't leeching user data and/or device identifiers (the latter is getting better, and hopefully Apple will require consent soon for the ID for advertisers).

It's trivial to make an app that regularly leeches a user's contacts to a server, then does anything the developer feels like to build a social graph. See Clubhouse. I fear the biggest issue for most users' privacy is the "legitimate" apps they use simply not being built with incentives aligned with their interests, while having access to phone home to any server with anything they can access.


> there's no outbound traffic filtering to check that the app isn't leeching user data and/or device identifiers

But there is the iOS sandbox FS. So if an app gets exploited, it can only ever leech the data of exactly THAT app - just the same as an auto-update might start to leech and upload that data. Given real-world practices, I think it is more likely that an app creator chooses to upload the data than that some malicious hacker does it.
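As a sketch of what that sandbox looks like from inside an app - assuming the standard Foundation FileManager API, with the foreign path below being a made-up example:

    import Foundation

    // Each iOS app runs in its own sandbox container; file APIs only
    // resolve paths inside that container.
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    print(docs.path)  // .../Containers/Data/Application/<UUID>/Documents

    // Trying to read another app's container (hypothetical path) fails.
    let foreign = "/var/mobile/Containers/Data/Application/OTHER-UUID"
    print(FileManager.default.isReadableFile(atPath: foreign))  // false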

> It's trivial to make an app that regularly leeches a user's contacts to a server

On iOS this is not possible - either the app requests access to the contacts list, in which case I have to consent via the iOS permission system, or it doesn't get access. And if I didn't give that consent, any security hole that exploits the app will need to obtain that consent too (which I will not give).
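A minimal sketch of that consent gate, assuming the standard Contacts framework (CNContactStore) - the app cannot reach contact data until the user has explicitly said yes:

    import Contacts

    let store = CNContactStore()

    switch CNContactStore.authorizationStatus(for: .contacts) {
    case .notDetermined:
        // Triggers the system consent dialog; the app cannot skip or fake it.
        store.requestAccess(for: .contacts) { granted, _ in
            print(granted ? "access granted" : "access denied")
        }
    case .authorized:
        print("already authorized")
    default:
        // .denied / .restricted: contact APIs simply return no data.
        print("no access - contacts stay out of reach")
    }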


From a technical perspective, you're of course right.

I fear however that the majority of "regular users" are being coerced into giving consent without realising what is happening - seeing the number of people who end up in a FOMO-induced panic to join Clubhouse (or whatever the next big phone-number-based app is), a simple "give access to your contacts to invite a friend" masks the fact that the app uploads your contacts to its server every time you open the invite tab.

It feels like we need to address coercive practices, or at least try some kind of taint analysis that allows iOS to alert when it believes the memory buffer about to go into a networking API originates from permission-protected data - "are you sure you want to let the app upload your contacts?". But I suspect we'd just end up shifting the problem, and they'd coerce users again, ad infinitum, until they harvest their social graph (illegally, at least in Europe/UK).
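A purely hypothetical sketch of what such a taint check could look like - nothing like this exists in iOS today; the Tainted wrapper and checkedUpload shim are made up for illustration:

    import Foundation

    // Hypothetical: values read via permission-gated APIs carry a taint tag.
    struct Tainted<Value> {
        let value: Value
        let origin: String  // e.g. "Contacts"
    }

    // Hypothetical OS networking shim: before tainted bytes leave the
    // device, prompt the user and proceed only on explicit approval.
    func checkedUpload(_ payload: Tainted<Data>, to url: URL) {
        print("App wants to send data derived from \(payload.origin)")
        print("to \(url.host ?? "unknown host"). Allow? [y/n]")
        // ...upload only if the user consents...
    }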


> hopefully Apple will require consent soon for the ID for advertisers

Just think through the implications of that phrase for a moment, though. Your own device comes with a built-in mechanism specifically designed for advertisers to track you. Why was that ever a good idea in the first place?


Agreed - it really is absurd. I once tried, as a thought experiment, to design a "platform" where each execution environment of an app was absolutely indistinguishable from any other.

Unfortunately, to make it work you can't give it network access (easily, at least). And there is a whole host of stuff in /proc and /sys that you also need to block (at least on Android) - there's just too much unique per-device information available to apps. Clearly, ensuring runtimes are indistinguishable was never a design goal; simply chroot'ing into a virtual filesystem would prevent a lot of this, as long as the APIs are limited enough.

But alas, when your phone OS comes from an adtech company, that is probably a hint they are not interested in making it indistinguishable from others.


Such mechanisms have already existed and never needed OS-level sanction. It’s pretty clear that Apple is employing the strategy of “embrace, extend, extinguish” against tracking and privacy compromising dark patterns. In other words, force developers to use a special API, then give consumers the ability to block it. The current stoush with Facebook is only the most formidable hurdle Apple has encountered so far.


That is the usual argument, but I don't see how it stands up to scrutiny.

Either there are alternative ways to track a user of an Apple device without IDFA or there are not. If there are, then it is reasonable to assume that unethical advertisers will return to using them if their access to IDFA is gated.

So, whether or not IDFA exists, the only robust way to protect users is to block apps from having access to anything about the host device that implicitly provides a unique method of identifying the user.

This is what other platforms have been trying to achieve. For example, in the web browser ecosystem, software has been restricting programmatic access to features that can be used for fingerprinting or deliberately reducing the level of detail exposed by some APIs.

With control of the entire ecosystem, why is Apple not better placed than anyone else to adopt this strategy? And whether or not Apple is technically capable of achieving the perfect result, how does introducing IDFA make any difference?


It does seem like when IDFA goes, apps will be struggling for identifiers, at least on iOS. I've seen a few articles suggesting they will be back to trying to fingerprint devices (in ways that break the App Store terms of service).

I agree entirely - it seems that the solution going forwards is to prevent any access to any kind of persistent identifier that is part of the runtime environment. This might get in the way of some security mitigations (which seem pretty weak to begin with) and some monetisation models (i.e. enabling pervasive tracking across apps), but the end result feels more "clean" and like users would expect - the app runs in a sandbox where there's no access to anything to distinguish the app from any other instance of it.
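This is roughly the direction Apple's App Tracking Transparency framework takes - a minimal sketch, assuming the AppTrackingTransparency and AdSupport frameworks: until the user opts in, the advertising identifier the app can read is all zeros:

    import AppTrackingTransparency
    import AdSupport

    // Ask for tracking consent; the OS shows the prompt, not the app.
    ATTrackingManager.requestTrackingAuthorization { status in
        let idfa = ASIdentifierManager.shared().advertisingIdentifier
        switch status {
        case .authorized:
            print("IDFA available: \(idfa)")
        default:
            // Denied/restricted/undetermined: a zeroed-out identifier,
            // e.g. 00000000-0000-0000-0000-000000000000.
            print("IDFA zeroed: \(idfa)")
        }
    }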

Clearly keeping this up at the network level is far harder (and some app developers will probably fall back to using the WAN IP and other factors), but perhaps there are solutions here too - perhaps relay servers mix user traffic (while leaving it HTTPS-protected) to prevent services from seeing user IPs, and a virtual network interface inside the runtime ensures apps only ever see an IP of 10.0.0.1.

It seems a worthy goal to try to ensure that runtime environments are indistinguishable, at least to end cross-service ad tracking once and for all. Handling it within apps probably comes down to policy - I'm not sure any technical mitigation can prevent it while apps remain Turing-complete (as they can simply store their own identifier).


> I don't see how it stands up to scrutiny.

That would be premature. Nobody is in a position to know how the "extinguish" portion of the plan will turn out because it hasn't happened yet. All we can say is that the plan looks quite robust in theory and would be a significant coup for Apple if they can pull it off.

Obviously there will always be some unethical operators, but that is true of all major platforms. Apple has the benefit of top-down control and some amount of market incentive to get it right.

> For example, in the web browser ecosystem

...there is precious little to block effective fingerprinting of 99%+ of installs and little prospect of that changing.


There's a third possibility, and I think it's Stallman's ideal computing landscape: all users care deeply about the code running on their machines and they are competent in applying and vetting patches, building from source, etc. It's unrealistic, sure, but it sounds nice right about now.


I think back when he posted it, it might have been possible for sufficiently motivated and talented individuals to do such vetting, albeit even then it would have been a stretch. Nowadays the amount of code running on various devices in a single home has increased so dramatically...

Think of TV remotes. They used to work with infrared; nowadays there are Bluetooth remotes (not sure how widely deployed they are, but at least some vendors offer them instead of IR remotes). An infrared device can be send-only - no way to hack it even if you have an infrared sender in range, and the pattern transmitted was quite simple. The Bluetooth protocol, however, requires both sending and receiving ability, and a Bluetooth stack is in the tens-of-thousands-of-lines range. There will be a security bug somewhere...


This TV remote example gets exactly to the point: what do you think is more likely - a malicious hacker driving a van and parking in front of your house, just to exploit via Bluetooth a TV remote that holds no sensitive data, is not connected to the internet and can only be used to make TV inputs like switching channels? Or rather that your TV vendor, like Samsung or LG, decides one day to offer a firmware "update" that logs what you watch, uploads screenshots of the device and installed apps to the cloud and sells them to third parties? My bet is on the latter, and that exactly makes the point that auto-update is more dangerous than having a security flaw in a Bluetooth TV remote.


I agree it's unrealistic, but I think Stallman and many others like him would rather forego the benefits of a bluetooth remote than embrace the status quo.

OpenBSD, for instance, was recently discussed on here for having dropped its Bluetooth stack over concerns about the correctness of the implementation - and no one has bothered to write a better one.


I don't think that was ever Stallman's point. He is smart enough to recognize that most users aren't going to be technically competent.

He's also smart enough to recognize that most people are going to have someone technically competent in their circle of friends, or within a few minutes' walking distance. So people need a set of rights that will allow them to ask or hire someone else to take care of their computing. In this sense, Free Software is like Right to Repair - it isn't about making individuals technically competent; it's about enabling local markets of specialists.


Not everybody needs to do that, but then you need to rely on people you can trust. Of course, we already do that to some extent in app stores: I don't install something from unknown developers that requires all sorts of permissions it shouldn't need; I do install from developers I think I can trust. But if I don't trust them, I lack the ability to inspect their code. That's indeed the big thing that's lacking.


We need a culture that distinguishes between truly necessary updates, like security ones, and general updates that change functionality and interfaces. One type is essential, and we want to encourage everyone to install it promptly. The other should always be optional, and the changes being made should always be transparent. Bundling the two is a common but user-hostile behaviour.

This separation should be the price of admission for software developers who want to use online updates. By now there is probably also a need for real laws to regulate the industry: firstly, it is very clear that the industry will not regulate itself effectively, and secondly, it is no longer just random applications but essentials like operating systems, web browsers and even the software controlling your car that are treated in this cavalier way.


This would be nice, but a developer could still publish a malicious update as an important security fix.

Also it gets very hard for developers to keep track of past versions and apply new fixes to them, when they also have to apply fixes to the new versions.


> Also it gets very hard for developers to keep track of past versions and apply new fixes to them, when they also have to apply fixes to the new versions.

Then maybe they release too often?

I have been developing software professionally for a long time, much of it code that needed to be high quality. I have never worked on such a team that couldn't keep track of its own software, often over a period of years or even decades, and backport fixes when necessary.

Yes, it's less convenient for the developers than just having a single version that users are forced to update constantly if they want fixes. But it is achievable if you drop the pretence that every minor change in functionality or appearance must be pushed into production instantly through some CD system, which is of course a luxury that only those running hosted software have anyway.


And even when you choose to upgrade only manually, carefully reading the changelog, it often just says "Bugs fixed."

The Play Store doesn't give enough information to really judge if the upgrade is necessary.


"Bug fixes and performance improvements". ~AirBnB


My Nokia 7.2 has had so many performance improvement updates I fully expect it to be faster than the latest iPhone flagship.


Maybe I'm a Luddite, but updates are not always necessary. It's a barcode app - what updates does it need? Is there a CVE that needs to be patched? No? Then I don't need a new version.


I’m usually like this. Then my bank’s app refused to launch until I updated.

They re-designed it. When I went to click my usual “schedule payment” button on a bill payment, it just said “Coming Soon”.

I wasn’t a happy person about it.

Big Canadian bank too. US$65b mkt cap.


I never use my bank's app because I don't fully trust my phone, but they redesigned their website to be more mobile-friendly. Now I can only see 10 operations at once instead of the 30 before, and I can no longer sort by amount...

When I complained about it two years ago, my banker told me to participate in their feedback program... Now they send me market-research polls about future products and features - there is no way to report usability issues, and it's not even run by the bank itself...


Financial services companies do seem to be particularly bad when it comes to UIs for their customers. Both awful apps and broken "mobile-first" sites seem to be par for the course these days. A few do try to do better, but the reality is that most people don't change banks for much more serious reasons than this, so the banks have a financial incentive to just throw some mostly workable junk together and ship it as cheaply as possible. :-(


> Big Canadian bank too. US$65b mkt cap.

Well then, let me tell you about Toronto-Dominion Bank (TD, market cap ~$105B).

The app allows you to photograph a cheque to deposit it. This option is displayed for their TD USD chequing account.

I scanned a cheque from a US bank in the app (to deposit into my USD chequing account), only to be informed that cheques from US banks cannot be deposited using the app and that I'd have to go to a branch.

The same app is missing transactions and does not correctly display the current balance of some accounts (which are correctly shown in EasyWeb). The app has also blocked screenshots, so I was unable to provide their customer support with proof of the missing transactions.

Call me entitled, but I would expect all transactions and current account balances visible in the web interface to be accurately reflected in the bank's official app.

If you have ever experienced N26, Revolut, or any number of European "FinTech" banks, you will understand that Canadian banks are busy banging rocks together while telling you they're hot shit.


> I scanned a cheque from a US bank in the app (to deposit into my USD chequing account), only to be informed that cheques from US banks cannot be deposited using the app and that I'd have to go to a branch.

Dunno if Canadian banks would be game for this, but back when AdSense only mailed cheques in US$, and inexplicably refused to e-deposit to my US-based bank account, I’d mail my cheques in.


It’s a bank app. Keep your bank apps up to date.

Complain about updates all you want but not keeping your bank apps up to date is the wrong solution.


Generally the APK can be decompiled and the protections stripped, if it really bothers you to update.


"is there a CVE" is not a question that regular people can, will, or in my opinion even should ask.

I mean, if they do, all the better, but my point is that advanced enough tech knowledge should not be a requirement for a safe system.


Better scanning in low light, better error correction in code recognition, ability to recognise codes from a further distance, faster capture of codes, more options of what to do with the resulting data, reduced power usage while scanning, better user interface choices (e.g. updating to support more devices or matching new platform UI), ability to interface with external barcode scanners, better privacy protections for the user, reduction in overall package size, etc etc etc.

There’s always more things you can do to a product to improve it for its users.


Basically all phones are behind a NAT/firewall. You can't connect to them directly.


They can connect to whatever they want; that's more than enough.


Plus, many services can send push messages to the phone - e.g. WhatsApp. Bezos, for example, was hacked through a WhatsApp message containing an exploit.


On my home WiFi, my phone is on IPv6 and therefore not behind NAT (it is on a NAT address for IPv4, though). I've not done anything super-geeky to enable this; it's a standard router from a mainstream internet provider.

Pinging the IPv6 address from outside doesn't seem to work - I guess there is some sort of firewalling going on.


Until they turn on ADB - then it's a free-for-all.

https://www.bleepingcomputer.com/news/security/tens-of-thous...


> He is right in a sense, and cases like this give him proof, but on the other hand, most people don't see the point in patching their software.

We are not talking about patching. We are talking about updating.

> They'd just keep it around unpatched while connecting it to the network. Are millions of vulnerable devices better than giving software vendors the ability to remotely patch it?

Yes. Vendors do not patch their SW. For the average SW developer, fixing bugs is like castor oil. Remember the forced transition from Win 7 to Win 10, when a good OS was replaced by an abomination? And no, 10 is not better security-wise than 7; there are lots of RCEs in 10. Did you ever play an EA game? With Origin doing a 4 GB update before playing? On a 25 Mbps internet connection?

So for me: if you have a security patch for your SW, I will apply it - maybe after some buffer period in the case of known offenders (MS), depending on severity. If it's "performance and usability improvements", just forget it. If you didn't bother to write a changelog for your SW, I will not waste my time and money (an internet connection is not free) updating it.


What stops them bundling something malicious into the “security patch” and then not writing it into the change log?


Traditionally, when someone deliberately does something that causes significant harm to someone else, we address that by giving them a chance to defend their actions in court and if their defence is not acceptable we penalise them. It is strange how easily we forget normal behaviour as soon as technology comes into the picture.

If you had a shower fan/light that broke, and the manufacturer supplied a new model to replace it that had a working fan but no light and also an undisclosed camera and connectivity that sent everything it saw home to the manufacturer, no-one would be debating the situation. People would be going to jail.


App review... maybe? But the review (especially on Android) would have to be much more careful than it is nowadays...


> We are not talking about patching. We are talking about updating.

No, he's talking about all auto updates. Here's the interview with the quote in question: https://archive.org/details/LundukeHourApril14RMS



