> The very fact people think they need to read (fiction) books released this year more than ones released before is baffling.
Do they really?
I was comparing rates of production vs consumption. It doesn't follow that what is being consumed in a given year is that year's production.
My guess is that most of the books written are read by hardly anyone. A few authors have a faithful following that will read their books as soon as they're out (which isn't too baffling). Reviewers and critics may indeed be more likely to review new books, which might impact people's decisions (again, not necessarily baffling). Book shops also put new books forward, but those tend to be books by trendy authors.
Other than the few fashionable books that come out each year, you'll find reviewers like the one described in the article who don't seem to focus on new books (e.g. they talk about Dostoevsky), so it is not obvious that people feel that compelled to read new books.
> the backlog of books spread across millennia, not a century.
I could not agree more! Plus, time does such a great job at filtering out the good from the bad (or the exceptional from the mundane). That's where lists of books entering the public domain, like this one [0], are important. Or the reviews [1].
Ultimately, the fact that there is more available to read than is possible even to the most voracious of readers means that most people will rely on guidance on what to read.
> Meta devised an ingenious system (“localhost tracking”) that bypassed Android’s sandbox protections to identify you while browsing on your mobile phone — even if you used a VPN, the browser’s incognito mode, and refused or deleted cookies in every session.
That's only one example, and as I explained in a sibling comment [1], it doesn't even seem like something iOS designers were specifically defending against. In light of this, I think it's fair to say this example is poor and that another one is warranted. For instance, I'd consider the app tracking transparency changes to be something iOS was doing better than Android on, but Android has since reached feature parity there because you can delete your advertising ID, which basically does the same thing.
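For what it's worth, here's a minimal Kotlin sketch of what an app (or a tracking SDK inside it) actually sees when it reads the advertising ID. This assumes the play-services-ads-identifier dependency and, on Android 13+, the AD_ID permission in the manifest; the function name is just illustrative:

```kotlin
import android.content.Context
import com.google.android.gms.ads.identifier.AdvertisingIdClient

// Must run on a background thread: getAdvertisingIdInfo does blocking IPC.
fun readAdvertisingId(context: Context): String? {
    val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
    // If the user has opted out or deleted the ID in settings, it is either
    // flagged here or comes back as an all-zero UUID, useless for tracking.
    return if (info.isLimitAdTrackingEnabled) null else info.id
}
```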
Unless you're running Graphene or a similar security-minded distro, the sandboxing isn't very good. Okay, let's be honest: it's fairly abysmal at preventing fingerprinting. It could almost be accused of not even bothering to try.
Even with Graphene, I don't believe it mitigates much as far as apps collecting data goes. The idea for more privacy is that you run open source apps instead, ones that just don't collect data.
AFAIK Graphene is oriented towards strong device security with privacy as more of a side effect.
One thing with the sandboxed Play Services is that Google has fewer permissions on the device, so presumably they can collect less data.
Which I believe is GrapheneOS's argument when people praise microG: microG being open source does not fundamentally add privacy, since apps using microG will still phone home to Google's servers (that's the whole point of microG). What microG solves is that it removes the Play Services that run with system privileges on your device, and it turns out that sandboxed Play Services solve that as well.
> The idea for more privacy is that you run open source apps instead, ones that just don't collect data.
Yep, exactly. I just wanted to add the bit about the sandboxed Play Services, because it was not obvious to me at first :)
> Unless you're running Graphene or a similar security-minded distro, the sandboxing isn't very good. Okay, let's be honest: it's fairly abysmal at preventing fingerprinting.
Hmm... the sandboxing is a security feature; it's not there to prevent tracking (not sure what "fingerprinting" includes here). Android's sandboxing is actually pretty good (a lot better than, say, desktop OSes).
There is pretty much nothing you can do against an app requesting e.g. your location data and sending it to its servers. Fundamentally, the whole point of apps is that they can technically do that. So you have to choose apps you trust, and it's easier to trust open source apps.
What GrapheneOS brings in terms of sandboxing is that the Play Services run sandboxed like normal apps, whereas on stock Android the Play Services run with system permissions.
Color me surprised. But if you run the app using the sandboxing feature that it provides, surely it will only be able to see other apps installed within that same sandbox?
What is "the sandboxing feature" you're talking about? The standard app sandbox built into android allows apps to discover each other for various purposes, and grapheneos doesn't do anything to attempt to plug this.
Apologies. I was thinking of Android user profiles, which are available in mainline Android and (AFAIK) prevent the linked workaround from revealing any apps not installed in the same profile. So it's an example of an unfixed leak in Android, but not (as I had previously implied) something that Graphene corrects.
Honestly, the state of anti-fingerprinting (app, browser, and otherwise) is fairly abysmal, but that's hardly limited to Android, or even to mobile as a whole.
> Apologies. I was thinking of Android user profiles, which are available in mainline Android and (AFAIK) prevent the linked workaround from revealing any apps not installed in the same profile.
But there's no evidence that stock Android leaks apps installed across profiles? The link you provided doesn't discuss profiles at all, and stock Android also has Private Space and work profiles just like GrapheneOS.
> The iOS versions of social media apps extract way less data from the device than on Android, and are thus more privacy friendly.
I seriously doubt this. I agree that this is the perception, but anyone working in the mobile space on both platforms for the past ~2 years will know Google is a lot more hard-nosed in reviewing apps for privacy concerns than Apple these days. (I say this negatively; there is a middle ground and Apple is much closer to it. Google is just adding friction, seemingly in an attempt to shed their bad reputation.)
Last time I tried Android I had to sign my rights away to everything the app wanted just to install it.
In contrast, on iOS I get prompted to allow or deny access to my information when the app tries calling Apple’s API to fetch that information.
For example, if an app wants access to my contacts to find other people using the app. On iOS I can simply say “no” when it prompts me to allow it to read my contacts. I lose out on that feature to find other people using the app, which I don’t care about, but I can still use the rest of the app. On Android it seemed like by installing the app, I had already agreed to give up my contacts… it was all or nothing. If I didn’t like one privacy-compromising feature, I couldn’t use the app at all.
Android may have improved this in the last few years, but I found it to be a dealbreaker for the entire platform.
> Last time I tried Android I had to sign my rights away to everything the app wanted just to install it.
Sounds like that was years ago... Runtime permissions were introduced, what, about a decade ago (Android 6.0)? Of course, maybe it took longer than on iOS because of how Android works: iOS can just force everybody onto Liquid Glass with one update, while Android has to think more about backward compatibility.
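For what it's worth, the modern Android flow looks much like the iOS one described above: you ask at the point of use and the app keeps working if the user says no. A minimal Kotlin sketch (class and helper names are made up), assuming an AndroidX activity and READ_CONTACTS declared in the manifest:

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class FriendFinderActivity : AppCompatActivity() {

    // Register the launcher up front; the callback runs after the system dialog.
    private val askContacts =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                findFriendsFromContacts() // only now may we read contacts
            } else {
                showManualSearchInstead() // denial is not fatal; the rest of the app keeps working
            }
        }

    // Called from e.g. a button click, at the point the feature is actually used.
    fun onFindFriendsClicked() {
        askContacts.launch(Manifest.permission.READ_CONTACTS)
    }

    private fun findFriendsFromContacts() { /* hypothetical */ }
    private fun showManualSearchInstead() { /* hypothetical */ }
}
```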
You still have the same issue on Android. If an Android app requests e.g. precise location, it can refuse to run until you grant it, and there's nothing you can do.
That sort of behaviour is prohibited on iOS, and an app won't be approved if it does it. Apps have to allow declining the location permission, or at least accept approximate location.
Not sure I understand. So you're saying that a bad app on Android can request all permissions and tell you that it will refuse to run unless you give them, and the same app would be declined on iOS?
I could agree with that; Apple is more picky. Now personally, if an app does that, I uninstall it.
But technically, the Android rules are that you shouldn't do that, and when you request a permission you need to explain to the user why you request it.
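A rough Kotlin sketch of what following those rules looks like in practice (helper names are hypothetical, minSdk 23+ assumed): show a rationale when appropriate, request fine and coarse location together so the user can grant only approximate location via the Android 12+ picker, and keep working when the request is declined:

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class NearbyActivity : AppCompatActivity() {

    private val askLocation =
        registerForActivityResult(ActivityResultContracts.RequestMultiplePermissions()) { grants ->
            when {
                grants[Manifest.permission.ACCESS_FINE_LOCATION] == true -> usePreciseLocation()
                grants[Manifest.permission.ACCESS_COARSE_LOCATION] == true -> useApproximateLocation()
                else -> runWithoutLocation() // declining must not brick the app
            }
        }

    fun onShowNearbyClicked() {
        if (shouldShowRequestPermissionRationale(Manifest.permission.ACCESS_FINE_LOCATION)) {
            // Explain *why* the app wants location before asking again.
            showLocationRationaleDialog()
        }
        // Requesting both permissions lets the user pick "approximate only" on Android 12+.
        askLocation.launch(
            arrayOf(
                Manifest.permission.ACCESS_FINE_LOCATION,
                Manifest.permission.ACCESS_COARSE_LOCATION
            )
        )
    }

    private fun usePreciseLocation() { /* hypothetical */ }
    private fun useApproximateLocation() { /* hypothetical */ }
    private fun runWithoutLocation() { /* hypothetical */ }
    private fun showLocationRationaleDialog() { /* hypothetical */ }
}
```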
It was there for the launch of the App Store with iOS. They didn’t have to worry about backward compatibility, because they took the time to worry about user privacy and app developer overreach from the very start.
Another difference is that Apple has 100% control over the hardware and can roll out its updates much more effectively than Android.
Android has to deal with tons of devices, and allow developers to update their tooling while supporting older devices. I actually find it quite impressive how they manage to do that. Must be difficult.
All the more reason to get the design right out of the gate, instead of throwing something out there and hoping to fix it later. Especially something so fundamental, like privacy.
It would be nice if the app stores offered different levels of requirements. Let the market decide how much it cares about privacy (and security, and ...), reduce the friction for developers who want to do a particular thing, and give end users more confidence in the entire system.
You'd think this would be more widely known! I feel like the general sentiment says the opposite is the case. What could one point to in the future to back up what you're saying here?
It isn't really type inference. Each closure gets a unique type. Rather, it's an automatic decision about which traits (think roughly "superclasses", I guess, if you aren't familiar with traits/typeclasses) to implement for that type.
No, I don't think so, not unless there's some feature of Haskell type classes I'm completely unaware of.
If anything it's closer to SFINAE in C++, where it tries to implement methods but then doesn't consider it an error if it fails, and then infers the type classes (traits) based on the outcome of that process. Or the macro analogy another poster made isn't bad (with the caveat that it's a type-system-aware macro, which at least in Rust is strange).
I am not sure how Haskell works but I think what the previous poster meant is that the types get determined at compile time. Closures are akin to macros except you can't see the expanded code.
The problem is that most aren’t good, and bad ones can take a lot of effort to distinguish, if they look plausible on the surface. So the potentially good ones aren’t worth all the bad ones.
I agree that most of them are bad; I disagree that they take lots of effort to distinguish, and I am a maintainer who is unfortunately receiving more and more of them written with AI.
I was one-shotting voices years ago that were identical in timbre and tone to the reference voice; however, the issue I had was inflection and subtlety. I find that female voices are much easier to clone, or at least it fools my brain into thinking so.
This model, if the results weren't too cherry-picked, will be a huge improvement!
I've made a bunch of nontrivial changes (±thousands of characters), and none of them seems to have been reverted. I never asked for permission, I just went ahead and did it. Maybe the topics I care about are so non-controversial that no one has actually seen them?
There are many copy-editing projects that do this.
If you mean the left-leaning tone/bias, that will be a bit more spicy. But general grammar, tone, ambiguity, superlatives: that's the goal of copy editing.
> If you mean the left-leaning tone/bias, that will be a bit more spicy. But general grammar, tone, ambiguity, superlatives: that's the goal of copy editing
No, no, I mainly mean non-neutral phrasing and/or writing that is too personal, especially in articles about people. (“And they released that greeeat album! But unfortunately the critics did not understand them… Booh!”)
I agree. Wikipedia Cleanup is a good starting point. Or look for a WikiProject to join.
I've found the best way to learn and contribute is to jump into an existing project. Usually, direction is the hardest thing.
You can of course dive into an article and make changes, but you'll often get pushback (warranted or unwarranted) and that can be discouraging. It's a somewhat natural feedback loop.
Pure styling work is boring and no longer worth human time IMO (for a "good enough" result, starting from something bad), considering how "good" AI has become at it (while still needing to be supervised).