koolhaas's comments

Sure, some Linux distros (or is it macOS?) guard against `rm -rf /`: GNU rm, for instance, refuses to run it unless you pass `--no-preserve-root`. That breaks the typical UX of the command but prevents bad mistakes.


Presumably, it’s done this way so they can say that no computer other than your personal device scans photos and “looks” at decrypted, potentially innocent photos. And technically the original image is never decrypted in iCloud by Apple: if 30 images are flagged, Apple can then decrypt the CSAM scan metadata, which contains resized thumbnails, for confirmation.

In summary, I’m guessing they tried to invent a way where their server software never has to decrypt and analyze original photos, so they stay encrypted at rest.


Apple regularly decrypts iCloud data, including photos, in response to a valid warrant. This new local scanning method does not stop Apple from complying and decrypting images as they have for years.

https://www.apple.com/legal/privacy/law-enforcement-guidelin...

(Note: I have worked with law enforcement in the past, specifically on a case involving Apple and two iCloud accounts. You submit a PDF of the valid warrant to Apple. Apple then sends two emails: one with the iCloud data, encrypted, and a second with the decryption key.)


Of course, but it's a kind of last resort thing to support a valid legal process they cannot (and probably don't want to) skirt around. They also publish data on warrant requests.

To me it's pretty clear they are doing the absolute minimum possible to keep Congress from regulating them into a corner, where they'd lose decision-making control over their own privacy standards. The system they came up with is their answer for doing it in the most privacy-conscious way (e.g. not decrypting user data in iCloud) while balancing a lot of other threat-model details, like what happens if the CSAM-hash-providing organizations supply image hashes of, say, a burning American flag, and lots of other scenarios outlined in the white paper.


Calling resized thumbnails metadata is a bit of a stretch, IMO.

Surely that's just the data, but resized?


Yes, I agree, it's a bit of a stretch. Based on their white paper, it's a smaller version of the original image, I guess just large enough to support the human verification step.

But I'm unsure whether the thumbnail is included with every CSAM "voucher" -- it's likely only included once you pass the 30-image threshold. I need to read that section more carefully.


A thumbnail is included with every safety voucher. However, it is encrypted with a key that resides on your hardware and is unknown to Apple. So Apple doesn't have enough information to decrypt your thumbnails at will.

A secret-sharing scheme is used to drip-feed Apple the key: each positive match hands Apple one more share of your key, and shares below the threshold reveal nothing. Once the threshold is reached, Apple has enough shares to reconstruct your encryption key, and can use it to decrypt all your matching thumbnails at once.
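Apple's actual construction is fancier (it's wrapped inside threshold private set intersection), but the drip-feed mechanic described above is classic threshold secret sharing. A toy Shamir-style sketch in TypeScript, purely illustrative (tiny prime, Math.random; nothing here is Apple's code):

    // Toy k-of-n Shamir secret sharing over a prime field.
    const P = 2147483647n; // Mersenne prime; real schemes use far larger fields

    const mod = (a: bigint) => ((a % P) + P) % P;

    // Modular inverse via Fermat's little theorem (P is prime).
    function powMod(base: bigint, exp: bigint): bigint {
      let r = 1n, b = mod(base), e = exp;
      while (e > 0n) {
        if (e & 1n) r = mod(r * b);
        b = mod(b * b);
        e >>= 1n;
      }
      return r;
    }
    const inv = (a: bigint) => powMod(a, P - 2n);

    // Split `secret` into n shares; any k reconstruct it, k-1 reveal nothing.
    function split(secret: bigint, k: number, n: number): [bigint, bigint][] {
      const coeffs = [secret]; // random degree-(k-1) polynomial with f(0) = secret
      for (let i = 1; i < k; i++) coeffs.push(BigInt(Math.floor(Math.random() * 2 ** 31)));
      const shares: [bigint, bigint][] = [];
      for (let x = 1n; x <= BigInt(n); x++) {
        let y = 0n;
        for (let j = coeffs.length - 1; j >= 0; j--) y = mod(y * x + coeffs[j]); // Horner
        shares.push([x, y]);
      }
      return shares;
    }

    // Lagrange interpolation at x = 0 recovers the secret from any k shares.
    function combine(shares: [bigint, bigint][]): bigint {
      let secret = 0n;
      for (const [xi, yi] of shares) {
        let num = 1n, den = 1n;
        for (const [xj] of shares) {
          if (xj === xi) continue;
          num = mod(num * -xj);
          den = mod(den * (xi - xj));
        }
        secret = mod(secret + yi * num * inv(den));
      }
      return secret;
    }

    const shares = split(424242n, 30, 1000); // threshold of 30, like the voucher scheme
    console.log(combine(shares.slice(0, 30))); // 424242n

The key property: with fewer than k shares, every candidate secret is equally consistent with what the server holds, which is why below the threshold the vouchers reveal nothing at all.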


Fascinating, thanks for clarifying.


> Based on their whitepaper, it's a smaller version of the original image,

I seem to recall that the white paper speaks of a "visual derivative" without specifying it further.


The Technical Summary uses "visual derivative" without defining it, but their Threat Model PDF clarifies that it means a thumbnail:

>The decrypted vouchers allow Apple servers to access a visual derivative – such as a low-resolution version – of each matching image.

https://www.apple.com/child-safety/pdf/Security_Threat_Model...


Resized thumbnails aren't a stretch... they're a scale. Bum dum tiss.... I'll let myself out


Interesting technical problem/solution. Another benefit is saving millions of server-side computations, since modern iOS devices have neural chips, etc.

I suppose folks who don’t like privacy implications can downgrade to an iPhone 4 and maybe it will not support the feature.


Or turn off iCloud syncing of photos.


What word do you use when someone unrightfully gains possession of something that isn’t theirs?

Btw a lot of words in English have multiple meanings, and transform meaning over time, which can be confusing sometimes. For example, in baseball you steal a base, which was being protected by the other team, but you don’t remove the base from the field and run off with it.

I think steal works better than copy here, more accurately conveying meaning and intention, and unjust access.


I think the reason "steal" can feel strange here is that we've spent the last 15 years arguing that copyright infringement is "not stealing" because the original creator has not been deprived of anything.

The phrase "not stealing" is almost exclusively used in this context on HN: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


I think it's context-dependent, just like other uses of the word steal. With copyright infringement, internet communities have come to an agreement that it is not stealing, so avoiding the word in that context matters. In baseball it's not an issue, and neither with identity theft. For illegally obtained private photos, never intended to be shared or released to the world, is there a better word? It's such a different scenario; the only similarity I see is that both involve files on a computer.


2023 Q1: each TLD you make web requests to will need an individual human-moderated entitlement.


Wait, even to develop/test on your own device, without releasing, you need to fill out the form?


> Note: You can test your app using the iOS and iPadOS simulators without an active entitlement, but using multicast and broadcast networking on physical hardware requires the entitlement.

-- https://developer.apple.com/news/?id=0oi77447


The science is clear that sun exposure damages your skin, so you should absolutely use a sunscreen or an SPF 30 moisturizer every morning.

Digging deeper into the types of ingredients that block sun, things get a bit more tricky. In general, I’ve read dermatologists say “physical” sunscreen ingredients like zinc oxide are best, because they aren’t absorbed through your skin the way “chemical” ingredients are. But they also leave your skin looking whiter.

And beyond that, just wash your face morning and night with a face wash product, not bar soap. Something simple from Neutrogena (Liquid Neutrogena) is fine.


If you really are using sunscreen every day, you might want to have your vitamin d levels checked. Low vitamin d is associated with all sorts of adverse health outcomes.


I used to think it was as simple as using sunscreens with metals in them to reflect the sun (the whiter you look, the more effective), but it turned out to be pretty complicated once I did more research on how UVA and UVB are blocked.

https://www.healthline.com/health/beauty-skin-care/best-suns...

https://onlinelibrary.wiley.com/doi/10.1111/phpp.12214

https://www.mayoclinic.org/healthy-lifestyle/adult-health/in...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3781714/

For one, it seems legislation to study the ingredients in the US was only passed Nov 2019. Even the size of the titanium and zinc particles affects the effectiveness.

The Mayo Clinic link is probably the best summary: just use something and avoid peak sun hours. But it seems the non-metal sunscreen ingredients are also important, since they protect via absorption.


I would say it doesn’t really fix npm. It has its own centralized npm-style repo, deno.land/x, and its own package.json analogue, the deps.ts convention.

Because it can accept any URL as a dep in the source files, you can in theory do some cuter, weirder things with it. But security-wise, I’d argue arbitrary URLs for critical, high-traffic deps are harder to fix when bad things happen, without centralized control.


This is something I really don’t understand about Deno. I feel like I must be missing something.

How do I pin the versions of my dependencies? i.e. where is the package lock file?

If the idea is that every source file will specify the version it wants of every dependency, that seems unmanageable. Or if every source file just imports the latest version of its deps, how do I get reproducible builds?

I want a lockfile with an explicit manual step to update dependencies. “npm ci” seems to work well. I don’t see how Deno improves on it, quite the reverse.

Edit to add: hmm, there are some docs here that look relevant: https://deno.land/manual/linking_to_external_code/integrity_... But this reads as “if you really want package integrity, here are some awkward commands you can run to do it”. I strongly feel this should be the default behaviour that the tools should steer you towards. And in the examples on this page, the source code does link to specific library versions; I have a hard time accepting that that’s a good idea, except possibly for major versions.
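For concreteness, the flow those docs describe boils down to something like this (paraphrasing the manual; the std version pin below is just an example). You pin versions in a single deps.ts:

    // deps.ts -- the one place versions get pinned, re-exported everywhere else
    export { serve } from "https://deno.land/std@0.100.0/http/server.ts";

and then generate and enforce the lock file by hand:

    deno cache --lock=lock.json --lock-write deps.ts
    deno run --lock=lock.json --cached-only main.ts

Which works, but every step after the first is opt-in, hence my complaint.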


yeah, it's a bit awkward and you have to dig through the docs to find it. We're thinking about making it the default behavior.


Please do! I think that would be a big win.

The package integrity and import maps sections of the docs look like they do everything that’s needed, they just look fiddly to use correctly.

Maybe it just wants something as simple as an optional .denorc at the root of your project to set the default flags?


If you want locked-version deps from VCS, you point to a specific tag, not a branch, IIRC.


deno.land/x is _not_ a central registry. It's something we maintain as a convenience to the community, but we actually go to great lengths to make sure it doesn't receive preferential treatment from the CLI. Importing modules from nest.land, skypack.dev, esm.sh or jspm.io is common in the ecosystem and is something we're looking to keep encouraging.

It's also pretty easy to vendor your dependencies so that they don't move between the time you commit and the time your server starts. We also support lock files, so you don't _have_ to vendor your deps. Versioning is up to the server you import from, but typically you'd put the version in the URL somewhere (ideally a pinned version).

Security-wise, there are other articles out there that detail this, but it's not fundamentally less secure than importing from npm: you're pulling a JavaScript file from the internet in both cases. The cool thing with URLs is that it's pretty easy to audit and create an allowlist of known-good, trusted servers to import from in your org.

As for vulnerability reporting & patching: I think we're still lacking a good vulnerability database, that much is true, but fixing deeply integrated deps that have vulnerabilities is pretty easy using import maps.
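For example (module name and versions hypothetical), an import map can remap a vulnerable URL prefix everywhere in the module graph without touching any source:

    {
      "imports": {
        "https://deno.land/x/somelib@1.2.3/": "https://deno.land/x/somelib@1.2.4/"
      }
    }

Run with `deno run --import-map=import_map.json main.ts` and every import under the old prefix resolves to the patched version.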


> It's also pretty easy to vendor in your dependencies

It's not the first time I've seen this claim in this discussion. Describe "easy".

In the case of the much-hated npm, or yarn, it's as easy as:

   npm/yarn install package
This will pull both the package and all its dependencies. It will create a lock file and will essentially lock the version. There are additional ways to specify the exact versions, and version ranges for packages and dependencies.

Additionally, it's quite trivial to make npm/yarn pull not from npm but from, say, a private package registry that only has vetted and audited packages.
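For example, a one-line .npmrc is all it takes (registry URL hypothetical):

    registry=https://registry.internal.example.com/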

So. Given that "it's easy to vendor in your dependencies", how will all this look in Deno?

We already know that even such a simple thing as lock files requires multiple manual steps with awkward CLI parameters that people shouldn't forget [1]. This is... not easy.

[1] https://deno.land/manual/linking_to_external_code/integrity_...


Thanks for the response. And thank you for clarifying that there is a larger ecosystem of package repositories, and that Deno does not give preferential treatment to any. In theory npm can do the same, but of course there is official support and community gravitational pull around a single service.

I agree there is nothing fundamentally less secure in general, but what you don't get is the ability to standardize security: developer-account protection, immutability policies, DNS hygiene, and other centralized measures. Neither approach is bulletproof, but there are some things you can't protect against with arbitrary URLs.

I’d argue URLs are fine until you get massive use of a single package and it weaves itself into a complex dependency tree across multiple other critical projects. Then you worry about the what if’s.


> I’d argue URLs are fine until you get massive use of a single package and it weaves itself into a complex dependency tree across multiple other critical projects. Then you worry about the what if’s.

This is already an issue with npm. My personal take is that at that point the dependency should be vendored as much as possible, but obviously it's hard to fight the existing inertia. Also worth noting that the [std lib][1] is an attempt at a pragmatic solution: the foundational packages seemingly used by every framework out there essentially converge into the standard lib. I agree it's not perfect at the moment, but it's a start.

[1]: https://deno.land/std


>“ant colony” -> ”underground creepy crawly settlement”

Wow, and it passed peer review.


It passed squint survey.


> Apple has always been able to scan iCloud and send your data to police

I don’t see how the article you linked to explains how Apple can “scan iCloud” for the police. What do you mean? It seems like they just hand data over for a specific warrant related to individual users.


Sorry, are you insinuating that Apple has the ability to retrieve, read, and provide the unencrypted data from users' iCloud accounts, but not the ability to search across that data?


> Data generated by a node "flows" to the other nodes it is connected to and when any node has data on all it's inputs, it executes and produces data which flows to whatever nodes it's connected to.

I think you just described flow-based programming. There are at least a dozen visual languages/environments that work like this.
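For anyone curious, the firing rule you quoted is simple enough to sketch in a few lines of TypeScript (all names here are hypothetical, not from any particular flow-programming system):

    // A node fires once every one of its input ports has received a value.
    type Compute = (inputs: Map<string, unknown>) => unknown;

    class FlowNode {
      private values = new Map<string, unknown>();
      private targets: Array<[FlowNode, string]> = [];

      constructor(private ports: string[], private compute: Compute) {}

      connect(target: FlowNode, port: string) {
        this.targets.push([target, port]);
      }

      receive(port: string, value: unknown) {
        this.values.set(port, value);
        if (this.values.size === this.ports.length) {
          const out = this.compute(this.values);
          this.values = new Map(); // reset so the node can fire again
          for (const [target, p] of this.targets) target.receive(p, out);
        }
      }
    }

    // add(a, b) flows into double(x): sending 2 and 3 prints 10 downstream.
    const double = new FlowNode(["x"], (m) => console.log((m.get("x") as number) * 2));
    const add = new FlowNode(["a", "b"], (m) =>
      (m.get("a") as number) + (m.get("b") as number));
    add.connect(double, "x");
    add.receive("a", 2);
    add.receive("b", 3); // add fires with 5, which flows to double

Real systems add scheduling, backpressure, and cycle handling on top, but that's the core execution model.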

