There's context. Hank Green talked about it in https://www.youtube.com/watch?v=9zi0ogvPfCA, but in short, paraphrasing, and adding my own thoughts:

Jimmy Wales has been poked at with the question of whether he should call himself a founder or specifically co-founder for a long time, by right-wingers who think Wikipedia is too woke and want to irritate and discredit him as much as possible, and to instead raise up his co-founder Larry Sanger. Sanger has right-wing views and a habit of accusing any article that doesn't praise Trump and fundamentalist Christian values of being biased, taking such articles as proof that Wikipedia has a left lean.

The interview Wales walked out of was for his book tour. I imagine it's the umpteenth interview that week with the same question asked for the same transparently bad-faith reasons, trying to bend the interview away from his book and into right-wing conspiracy theory land.


> Jimmy Wales has been poked at with the question of whether he should call himself a founder or specifically co-founder

Not surprising! Are we setting aside how deceitful his answer is? Claiming all credit for a collaborative accomplishment -- which he does by adopting the "founder" title -- would rightfully provoke "poking" by interviewers. I can't imagine an interview not addressing a question that is so pertinent to Wales' notoriety. They literally cannot properly introduce him without confronting it! To say those interviewers are acting in "transparently bad faith" comes across to me as plainly biased.

Sanger's politics don't change this, either. It might be the case that you have to concede on this to people you politically disagree with.


Wales actually covers this at length in his book: https://en.wikipedia.org/wiki/The_Seven_Rules_of_Trust

He himself admits it's a complicated situation, and argues both his own and Sanger's positions.

Combined with the context provided by all the parent comments here, it's quite difficult to argue good faith, given that the interview was also specifically part of the book tour. There are many different and actually productive ways the interview could have addressed the conflict between Wales and Sanger.


> Not surprising! Are we setting aside how deceitful his answer is? Claiming all credit for a collaborative accomplishment -- which he does by adopting the "founder" title -- would rightfully provoke "poking" by interviewers.

I went down the rabbit hole on this a while back and came away with the impression that it's complicated. And whether or not Wales is being deceitful hinges on pedantic arguments and mincing of words. Should Wales be referred to as "a founder", "co-founder", or "one of the founders"? It's not as if he's titling himself "sole founder". And Sanger is still listed as a founder on his own Wikipedia page and on Wikipedia's article about itself.

It should also be noted that Sanger was hired by Wales to manage Nupedia, and that Wikipedia was created as a side project of Nupedia for the purpose of generating content for Nupedia. Does the fact that Sanger was an employee of Wales, and that Wikipedia only exists because Sanger was tasked with generating content for Nupedia, impact his status as a founder? Would Sanger or Wales have gone on to create a wiki without the other?

Can Steve Jobs claim to be the creator of the iPhone since he was CEO at the time it was created at Apple?

At the end of the day, Sanger was present at the groundbreaking of Wikipedia but was laid off and stopped participating in the project entirely after a year. He didn't spend 25 years fostering and growing the foundation. He did, however, try to sabotage or subvert the project 5 years later, when it was clear that it was a success. Interestingly, he tried to fork it into a project with strong editorial oversight from experts, like Nupedia, which flies in the face of the ethos of Wikipedia.


> And whether or not Wales is being deceitful hinges on pedantic arguments and mincing of words.

A big piece of this is that “founder” is actually a very unusual title to use here. Normally someone would “create a product” and “found a company”. Wikipedia is not a company. It’s not even the name of the foundation. It’s a product.

It’s kind of like Steve Jobs saying he founded the iPhone.

> He didn't spend 25 years fostering and growing the foundation.

Which isn’t however relevant to the title “founder”.


> Wikipedia is not a company. It’s not even the name of the foundation. It’s a product.

I'm inclined to agree with you but there are plenty of examples of founders of products: Matt Mullenweg, Dries Buytaert

> Which isn’t however relevant to the title “founder”.

I think it establishes credence for the claim. If Sanger's contributions warrant being called co-founder, then so do Jimmy Wales's.

The core arguments are "you shouldn't claim to be founder of a product" and "claiming to be founder implies sole founder". This is why I say it breaks down to mincing words.


> I'm inclined to agree with you but there are plenty of examples of founders of products: Matt Mullenweg, Dries Buytaert

Fair, but I do think the distinction between the company and the product is relevant. Wales’s claim to be the sole founder of Wikipedia relies specifically on muddying these two notions.

My recollection is that Wales has claimed that Sanger doesn’t qualify as a founder because he was an employee. OK, except Wikipedia is not an employer. If Jimmy Wales qualifies as the founder of Wikipedia specifically because of his ownership in the company that initially funded it, then the other founders of Bomis would seem to also be Wikipedia cofounders.

On the other hand, if being a founder of Wikipedia actually means being instrumental in the creation of the product, then Sanger seems clearly a founder.

Mixing and matching across two different definitions to uniquely identify Wales alone seems very self-serving and inconsistent.

To be clear, I’m not really disputing anything you said here. Just kind of griping about Wales’s self-serving definition of founder.

> I think it establishes credence for the claim. If Sanger's contributions warrant being called co-founder, then so do Jimmy Wales's.

I don’t know if anyone has claimed Wales should not be considered a cofounder. I think the general question is specifically whether he is the only founder. In this interview, he introduced himself as “the” founder.


> I don’t know if anyone has claimed Wales should not be considered a cofounder. I think the general question is specifically whether he is the only founder. In this interview, he introduced himself as “the” founder.

I don't think that he was claiming to be sole founder, and I don't think claiming to be founder implies you're the sole founder. The choice of "the" over "a" does have some implication, though, and his intentional choice to use "the" might have been to avoid broaching the subject of Sanger. It's clearly a touchy subject for him.

And at the same time if Steve Jobs or Bill Gates were introduced as the founders of their respective companies I personally wouldn't think much of it.

At the end of the day, the Wikipedia pages on Wikipedia and Sanger credit Sanger appropriately, so it's not as if Wales is exerting his will to erase Sanger or his contribution. He just gets pissy when you bring it up.


In the specific case, this is a long-running thing. Historically Wales has in fact dismissed Sanger as a founder and presented himself as the sole founder. That’s why the interviewer poked at this immediately. It’s also why Wales got so annoyed: he’s probably had this exact same conversation a million times and didn’t want to do it again.

If Bill Gates called himself “the founder” of Microsoft, people would probably dismiss it as a slip of the tongue. For Wales, I don’t think it was a slip of the tongue at all. It’s an intentional choice. I don’t agree with his interpretation, but I also don’t think he’s obligated to rehash the topic in every single interview.


The inability of wealthy people to take responsibility for themselves, blaming their own bad behavior on the mere existence of Trump instead, is wearing exceptionally thin.

Credit your co-founders. Even if you don't agree with them anymore. There's no excuse not to.

If you've been asked the question a lot then you should be _very good_ at answering it by now.


Ok, but Tilo Jung is the absolute opposite of right-wing.

Yes, but questions can be asked in different ways, and Tilo Jung, at the least, has never cared whether his questions are offensive, or about tripping up the person being interviewed.

A group of people seems to think that journalists should trip people up, like in interrogations, instead of being hard on the topic but nice in tone.


Yeah, that sentiment surely exists: PR and journalism are not the same. Some would even argue that journalism should try to find facts, and that being particularly pleasant and nice while doing so is secondary to the goal of fact-finding; it’s not PR, after all. One could even go so far as to speculate that a journalist being “nice” is not genuine but just a method to gain information. I know I am biased here, as this is how I want it to be.

The case of Tilo is quite specific: his interview style uses methods that are uncommon and in part extremely unpleasant, but super effective in making people accidentally confess to him while forgetting all their media training.


I would have expected at least some consideration of public perception, given the extremely negative opinions many people hold about LLMs being trained on stolen data. Whether it's an ethical issue or a brand hazard depends on your opinions about that, but it's definitely at least one of those currently.


I made the mistake of first reading this as a document intended for everyone, when it is merely public.

This is a technical document. It is useful in illustrating how a guy who is well-respected in his field (and who once gave a talk I didn’t understand but was captivated by) intends to guide his company’s use of the technology, so that other companies and individual programmers may learn from it too.

I don’t think the objective was to take any outright ethical stance, but to provide guidance about something ostensibly used at an employee’s discretion.


He speaks of trust and LLMs breaking that trust. Is this not what you mean, but by another name?

> First, to those who can recognize an LLM’s reveals (an expanding demographic!), it’s just embarrassing — it’s as if the writer is walking around with their intellectual fly open. But there are deeper problems: LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too? The reader can’t be sure — and increasingly, the hallmarks of LLM generation cause readers to turn off (or worse).

> Specifically, we must be careful to not use LLMs in such a way as to undermine the trust that we have in one another

> our writing is an important vessel for building trust — and that trust can be quickly eroded if we are not speaking with our own voice


I use unwrap a lot, and my most frequent target is unwrapping the result of Mutex::lock. Most applications have no reasonable way to recover from lock poisoning, so if I were forced to write a match for each such use site to handle the error case, the handler would have no choice but to just call panic anyway. Which is equivalent to unwrap, but much more verbose.

Perhaps it needs a scarier name, like "assume_ok".
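For illustration, a rough sketch (untested) of the two equivalent forms, with `assume_ok` here being a hypothetical extension trait, not anything in std:

    use std::sync::Mutex;

    fn with_unwrap(m: &Mutex<Vec<u8>>) {
        let _guard = m.lock().unwrap(); // panics if the lock is poisoned
    }

    fn with_match(m: &Mutex<Vec<u8>>) {
        // The verbose equivalent: the Err arm has nothing useful to do but panic.
        let _guard = match m.lock() {
            Ok(g) => g,
            Err(poisoned) => panic!("mutex poisoned: {poisoned}"),
        };
    }

    // The hypothetical "scarier name" as an extension trait:
    trait AssumeOk<T> {
        fn assume_ok(self) -> T;
    }

    impl<T, E: std::fmt::Display> AssumeOk<T> for Result<T, E> {
        fn assume_ok(self) -> T {
            self.unwrap_or_else(|e| panic!("assumed Ok, got Err: {e}"))
        }
    }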


I use locks a lot too, and I always return a Result from lock access. Sometimes an anyhow::Result, but still something to pass up to the caller.

This lets me do logging at minimum. Sometimes I can gracefully degrade. I try to be as elegant in failure as possible, but not to the point where I wouldn't be able to detect errors or would enter a bad state.
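A minimal sketch of what I mean (assuming anyhow; untested):

    use anyhow::{anyhow, Result};
    use std::sync::Mutex;

    fn read_counter(m: &Mutex<u64>) -> Result<u64> {
        // Turn a poisoned lock into an error the caller can log or degrade on,
        // instead of panicking at the use site.
        let guard = m.lock().map_err(|e| anyhow!("lock poisoned: {e}"))?;
        Ok(*guard)
    }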

That said, I am totally fine with your use case in your application. You're probably making sane choices for your problem. It should be on each organization to decide what the appropriate level of granularity is for each solution.

My worry is that this runtime panic behavior has unwittingly seeped into library code that is beyond our ability and scope to observe. Or that an organization sets a policy, but that the tools don't allow for rigid enforcement.


> the handler would have no choice but to just call panic anyway

The handler could log the error and then panic. Much better than chasing bad hunches about a DDoS.
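Something like this sketch, for example (assuming the log crate):

    let guard = mutex.lock().unwrap_or_else(|e| {
        // Record why we're dying, so the crash is diagnosable later.
        log::error!("mutex poisoned: {e}");
        panic!("mutex poisoned");
    });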


Could you elaborate on what those "maybe more powerful reasons" are?


No, I don't feel qualified. But it looks to me like there were times when challenging authorities and questioning general opinion was cool, and those times ended before social media kicked in. Maybe urbanization, and people generally not staying in one place long enough, are to blame; not sure.


Such people should perhaps consider a ceramic-bladed knife. They stay sharp basically forever because the blade is extremely hard, with the downside that it's not repairable with ordinary equipment if chipped. But if the owner would never maintain their metal knife anyway, then it's not _really_ a downside.


They do not stay sharp.

On hard material and when overloaded, they will chip in large, unfixable chunks.

On softer material, they continuously sharpen their edges at a microscopic scale, fracturing away tiny chips as they're worn and exposing fresh glassy ceramic edges. A well-used ceramic blade becomes micro-serrated.

This sounds fantastic until you think about what is happening to the shards of hard glassy ceramic which briefly become part of your food before becoming part of your gastrointestinal tract.


How much mineral and metal grit does one consume on a regular basis? The amount of ceramic material shed by a ceramic knife blade is, just from looking at it, obviously very small. I bet the amount of grit I've eaten from having a taste for raw oysters vastly outweighs what I'd get from a lifetime of using ceramic knives.


"grit" is usually spherical or close to it, because it has seen chemical and mechanical weathering. It is often calcium crystals of some type.

This is neither. These are long dagger shapes, significantly larger than a diatom, very hard, sharpened to a fine edge.


They aren't giving you 'microcuts' in your gut, any more than anything else.

Why? Because the force applied isn't in a uniform downward direction, the way it is when you cut with a knife.


Grit is shaped differently than ultra sharp shards.


People can work up to eating fairly large shards of glass. Eating tiny bits of ceramic occasionally is unlikely to be an actual issue, any more than ingesting a little bit of sand.


> People can work up to eating fairly large shards of glass.

Sorry, what? Could you perhaps elaborate on this a bit?


Glass eating is a real thing with a surprising number of documented cases. In some cases it’s classified as hyalophagia, a form of pica where people focus on glass, but it doesn’t necessarily have significant negative side effects. https://en.wikipedia.org/wiki/Pica_(disorder)

There’s also a magic trick where people eat sugar that’s very clear and looks like glass, but that’s a different thing.


Your defense of how safe eating glass is, is to point out that mentally ill people sometimes do it?


No, I’m saying some mentally ill people consume vast quantities of glass and medical professionals are only concerned with the most extreme cases. It’s like saying the forces involved in a boxing match are a useful benchmark for brain trauma, on that scale a 6 month old infant punching you is so far below that benchmark you don’t need to worry about it.

Which means if you’re worried about consuming 1/100,000th as much it’s clearly not a big deal.


I saw this guy eat a Dell PC once:

https://en.wikipedia.org/wiki/Michel_Lotito


If your body didn't have ways to deal with sharp things you eat, we'd never eat fish due to the risk of pin bones. Microscopic shards of ceramic pose very little risk.


Unsubstantiated claim has me convinced!


Found the asbestos salesman.


I used to be a fan, and used them heavily for years. They stay sharp... for a while, and then there's no practical way to re-sharpen them. You get a couple good years out of them and then a lot of mediocre to bad years.

Running a steel knife through an electric sharpener once a month (a 2-minute operation) keeps it feeling consistently like new.


Are there consumer electric sharpeners that you'd recommend? My local grocery store does knife sharpening but it's not super convenient


I use one of these: https://www.ikea.com/us/en/p/skaerande-knife-sharpener-black...

It's cheaper than an electric sharpener and doesn't carry the risk of taking off too much material from a blade due to overenthusiastic use.

I am 100% certain that there are multiple people on this thread who could tell me I'm getting less optimal results than with their tools and/or methods. I don't care. I'm getting results that work when I cook. I don't trust myself to get the angle right with a diamond stone.


Pull-through sharpening creates an edge that does not last. This channel has great explanations about why, and what to do instead: https://youtu.be/pagPuiuA9cY


That is an amazing video. I can confirm that the methods are correct: he mentions exactly what I've been doing for years, and explains and demonstrates it very clearly.


I linked this elsewhere on the page:

https://www.amazon.com/ChefsChoice-EdgeSelect-Professional-S...

I have one, I use it on my knives every 1-2 months. My knives will last decades rather than a "lifetime", but I don't care... they're always sharp and I don't have to work at it. I can buy new knives.



You can't use them to chop stuff by striking the board. I have used ceramic knives - they are cool, but they do indeed chip.


Why does this stupid idea have to be killed so many times? Being watched constantly means living in fear, but seeing it almost become legally mandated practice over and over again is itself a form of living in fear. I'm so tired.


Almost as if there should be a rule that disallows introducing the same proposal multiple times under different names.


AFAIK there is a rule like that, but it requires that the proposal was actually voted down. Chatcontrol was never voted on by the EP. They bring it up every half a year, and it won't be voted on until they can be sure it would pass.


Open source will find a way around it; there's no reason to be afraid. Unless one day their spyware is installed at the hardware level but that would be the equivalent of raiding our homes, so that's not very likely.


Many OSS developers live in Europe and the logical next step will be throwing them in jail for production of illegal software.


Open source won't save you. If they make encryption without a backdoor illegal, they will just throw you in jail.


It'll definitely save the criminals.

Selling drugs on the Internet is also illegal. Selling them in real life too. How many people are doing it still? Doesn't seem very effective, this solution.



They will once they catch you doing it, yes. The people who this _won't_ stop however, are criminals.


Seeing that carve-out for Google, with non-licensed Androids being banned from the age verification app, it looks like there's going to be a heavy swing towards trusted devices and apps in general. Perhaps untrusted ones will be blocked from carrier networks. And then what? You'll be able to have phones with your own custom apps/chat installed that are useless, or phones that are useful but where you're stuck with only official, compromised apps.


It was more misunderstood than stupid.

Ironically when Apple introduced their solution it was actually better than what we have now. It was interesting to watch people lose their minds because they didn't understand how the current or proposed system worked.

In the current system, everything can be decrypted in the cloud and is scanned for CSAM by all ISPs/service providers.

Apple wanted the device to scan for CSAM, and if a file got flagged, it could be decrypted in the cloud for a human to check it (again, what happens now).

If it didn't get flagged then it stayed encrypted on the cloud and no one could look at it. Not only was this better protection for your data, it also massively reduced server costs.

The CSAM database is also just a list of hashes of some of the worst CP videos/images out there. It doesn't read anything, just hash matching.

The chance of a mismatch is so incredibly small as to be almost non-existent.

Even so the current CSAM guidelines require a human to review the results and require multiple hits before you are even flagged. Again this is what is happening now.

Personally I'm against giving any agency the ability to read private messages, while at the same time I fully agree with what CSAM scanning is trying to do.

Realistically if countries want to read encrypted messages, they can already do so. Some do too. The fact that the EU is debating it is a good thing.


So what would stop the list of hashes from being extended with hashes of copyrighted media, evidence of corruption (labelled slander or an invasion of the perpetrator's privacy) or evidence of the preceding abuses of the system themselves?

Once you have an established mechanism for "fighting crime", "don't use it to fight that type of crime" is not a position that has any chance of prevailing in the political landscape - see also all the cases of national security wiretaps being used against petty druggies.


Hashes don't really work that way. They don't give you any high-level view of a photo's contents. You can't ask a hash to find all photos of a certain document or a meeting or anything like that. They really only detect exact copies, which makes them somewhat useful only for the most basic form of copyright infringement (i.e. proving someone has a copy).
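For the exact-copy case, the whole mechanism is roughly this (a sketch using the sha2 crate; the hash list is hypothetical, and real deployments reportedly use perceptual hashes instead, as discussed below):

    use sha2::{Digest, Sha256};
    use std::collections::HashSet;

    // True only if the file is a byte-for-byte copy of a known file;
    // changing a single pixel yields a completely different digest.
    fn is_known(file_bytes: &[u8], known_hashes: &HashSet<Vec<u8>>) -> bool {
        let digest = Sha256::digest(file_bytes);
        known_hashes.contains(digest.as_slice())
    }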


As far as I remember, Apple's proposal was to involve https://en.m.wikipedia.org/wiki/Perceptual_hashing which is meant to sidestep this exact problem - and either way, your objection would be equally applicable to CSAM. There is no mechanism that works better for it than for copyright enforcement.
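For intuition, perceptual hashes are a different beast: they are designed so that similar images produce similar hashes. A toy sketch of the classic "average hash" (not what Apple's NeuralHash actually does, just the simplest member of the family):

    // Threshold each cell of an 8x8 grayscale thumbnail against the mean,
    // packing the bits into a u64. Visually similar images then yield
    // hashes with a small Hamming distance.
    fn average_hash(pixels: &[u8; 64]) -> u64 {
        let mean = (pixels.iter().map(|&p| p as u32).sum::<u32>() / 64) as u8;
        pixels.iter().enumerate().fold(0u64, |h, (i, &p)| {
            if p >= mean { h | (1u64 << i) } else { h }
        })
    }

    // Small distance = likely the same image, even after resizing or re-encoding.
    fn hamming(a: u64, b: u64) -> u32 {
        (a ^ b).count_ones()
    }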


> So what would stop the list of hashes from being extended with hashes of copyrighted media, evidence of corruption (labelled slander or an invasion of the perpetrator's privacy) or evidence of the preceding abuses of the system themselves?

Absolutely nothing. But they are already scanning everything against these hashes now.


The problem with the CSAM detection is that it can be used for adversarial purposes as well. For example if someone decides an image is politically inconvenient and pressures it to be blocked by hash, then Apple may have to comply or remove themselves from an entire market. Building the mechanism to do that is not acceptable in a civilised society.

And of course does this really solve the real problem of child exploitation? No it doesn't. It allows performative folk working for NGOs to feel like they've done something while children are still being abused and it is being covered up or not even investigated as is so common today.

Improving policing and investigatory standards is where this should stop. We already have RIPA.

All this does is create the expectation that a surveillance dragnet is acceptable. It is not.


> Building the mechanism to do that is not acceptable in a civilised society.

This mechanism has been in production for many years at all service providers.

Examples:

- Microsoft since at least 2009.

- Google since at least 2013


> Realistically if countries want to read encrypted messages, they can already do so.

How? Are you implying asymmetric and symmetric encryption is broken? Because last time I checked, since Snowden, our encryption is basically the one single thing in the whole concept of the internet that has been done very right, with forward secrecy and long-term security in mind. AFAIK there are no signs that someone or something has been able to break it.

Also, the solutions you present do imply that someone already has the private key to decrypt. Sure, they'll say they'll only decrypt if you're a bad person, but the definition of a bad person changes from government to government (see the USA), and from CEO to CEO. Encryption should be, and mostly is, built on zero trust, and it only works with zero trust. Scanning, and risking the privacy of billions and billions of messages by having the key to read them because there have been some bad actors, is fighting a fly with a bazooka. Which sounds like comical overkill, but, fun fact, it also just doesn't work. It destroys a lot, and gains nothing.

I don't have a better solution for the problem. But this solution is definitely the wrong one.


> How? Are you implying asymmetric and symmetric encryption is broken?

Not at all.

You make encryption a crime. You ban certain apps. It won't stop people using encryption but that doesn't matter. Because just the act of using it makes you a dissident that can be dealt with.

That is currently the process in Iran and Egypt for example.

Even if they can't read the message and it's not illegal you can still be guilty by association. The act of sending a message can be tracked.

There have been countless situations like that, even outside the realm of instant messaging.


>How?

A couple of guys with $5 wrenches can be pretty effective at extracting cryptographic secrets.


Not on scale, though. Plus, this leaves some quite visible traces and leads to backlash.

That is like saying that Guantanamo can defeat religious terrorism. In individual cases, yes, on the whole, absolutely not.


I mean yeah, why break crypto when you can break kneecaps?


> The CSAM database is also just a list of hashes of some of the worst CP videos/images out there. It doesn't read anything, just hash matching.

The list presumably contains CSAM hashes. However, it could also include hashes for other types of content.

AFAIK the specific scope at any point in time is not something that can be fully evaluated by independent third parties, and there is no obvious reason why this list could not be extended to cover different types of content in the future.

Once it is in place, why not search for documents that are known to facilitate terrorism? What about human trafficking? Drug trafficking? Antisemitic memes spring to mind. Or maybe memes critical of some government, a war, etc.

This is because, despite the CSAM framing, it is essentially a censorship/surveillance infrastructure. One that is neutral with regard to content.


CSAM scanning has been around for at least 15 years. All service providers are required to do it by law.

You are absolutely correct with your "what-ifs" and this underlines the need for more oversight and transparency.

The process (my knowledge is a few years old) is that service providers or Law enforcement from countries can submit files to the CSAM database.

The database is owned by National Center for Missing & Exploited Children (NCMEC).

Once they receive the files they review them and confirm that the files meet the standard for the database, document its entry, create a hash and add that to the database. After that the file is destroyed.

This whole process requires multiple approvals, and numerous humans review the files before the hash goes into the CSAM database.

Also every hash has a chain of custody. So in the event of an investigation they know exactly everyone who was involved in putting that hash into CSAM.

So it's possible to submit an image that is not what the CSAM database is intended for, but the chances of it even remotely getting into the database are next to nothing. To add to this, service providers can be sued for submitting invalid files.


> CSAM scanning has been around for at least 15 years. All service providers are required to do it by law.

That is true for scanning in the cloud, but it's important not to conflate this with client-side scanning. The distinction between cloud and local processing is foundational. Collapsing that boundary would mark a serious shift in how surveillance infrastructure operates.

> Once they receive the files they review them and confirm that the files meet the standard for the database, document its entry, create a hash and add that to the database. After that the file is destroyed.

That is already a structural problem: If the original is destroyed, how can independent parties verify that database entries still correspond to the intended legal and ethical scope? This makes meaningful oversight functionally impossible.

Even if centralizing control in a state-funded NGO were considered acceptable (which is already questionable), locating that NGO in the US (subject to US law and politics!) is a serious issue. Why should, say, the local devices of German citizens be scanned against a hash list maintained under US jurisdiction?

> So it's possible to submit an image that is not what CSAM is intended for, but the chances of it even remotely getting into the database is next to nothing. To add to this service providers can be sued for submitting invalid files.

Procedural safeguards are good, but they don't solve the underlying problem: the entire system hinges on policy decisions that can change. A single legislative change is all it takes to expand the list’s scope. The current process may seem narrow today, but it offers no guarantees about tomorrow.

We’ve seen this pattern countless times: surveillance powers are introduced under the pretext of targeting only the most heinous crimes, but once established, they’re gradually repurposed for a wide range of far less serious offenses. It is the default playbook.


> That is true for scanning in the cloud, but it's important not to conflate this with client-side scanning.

From what you say it's clear you never read Apple's paper on this.

The client puts a flag on a match. It is only verified on the server, both by another scan and by law enforcement.

If the client doesn't flag a file, it can never be decrypted on the server by anyone except the device owner.

The current system just checks everything. If your device never talks to the cloud, nothing happens in either scenario.

> That is already a structural problem:

You seem to have an oversimplified view of how it all works. They don't just throw hashes in.

They can verify it by the chain of custody and documentation that is stored about that hash.

> the local devices of German citizens be scanned against a hash list maintained under US jurisdiction?

CSAM is a UN protocol that has 176 countries signed onto it. Including Germany.

Many countries also have their own independent department that works with CSAM. In Germany, the Federal Police (BKA) fill that role. They work with NCMEC on ensuring the CSAM hashes are correct. Germany is also one of the strictest countries in relation to CSAM.

> the entire system hinges on policy decisions that can change.

Again, it's an oversimplification. If the US government did do that:

- It would first be challenged in the courts.

- They would not be able to hide the fact they have changed it.

- This would lead to service providers not assisting with the corrupted CSAM.

- As this is a worldwide initiative the rest of the world can just disconnect the US from the CSAM until what is put in is confirmed.

> It is the default playbook.

If they wanted to do that, the CSAM database is the worst way to do it.

I'd recommend you read up on all of it a bit more. Most of your claims are unfounded in relation to the CSAM.


> From what you say it's clear you never read Apple's paper on this. ... You seem to have an oversimplified view of how it all works. They don't just throw hashes in. ... Again, it's an oversimplification. If the US government did do that: ... I'd recommend you read up on all of it a bit more. Most of your claims are unfounded in relation to the CSAM.

The posturing about supposed expertise adds nothing. If you want to make an argument, make it. Vague appeals to technical depth are just noise.

> The client puts a flag on a match. It is [...] verified on the server [...] The current system just checks everything.

Sure, that’s how the flagging process works. It’s also beside the point. Listing technical details doesn’t change the core issue: this system performs scanning on the user device, which is what makes it problematic.

> If the client doesn't flag a file, it can never be decrypted on the server by anyone except the device owner. [...] If your device never talks to the cloud, nothing happens in either scenario.

Correct, but not relevant here. No one is arguing that airgapped devices leak information. The issue is what happens when devices are online.

> [On the structural problem of inability of independent oversight] They can verify it by the chain of custody and documentation that is stored about that hash.

What specific documentation would allow actual evaluation? And who can access it? The process is opaque by design: The list of neural hashes is private, matching and flagging happen silently, and escalation logic like threshold levels or safety voucher generation is not open to inspection. Whatever theoretical accountability might exist, it’s irrelevant in a system of systematic secrecy that cannot be independently observed or audited.

> CSAM is a UN protocol [...] countries [...] work with NCMEC on ensuring the CSAM hashes are correct. Germany is also one of the strictest countries in relation to CSAM.

Yes, Germany has police and ofc works to fight CSAM. That doesn’t change the concern: the system design is extensible and unverifiable. If a U.S. administration wanted to expand the scope (say, for terrorism, extremism, drugs, or IP enforcement), who exactly stops them? Not a German agency. Certainly not NCMEC.

> [On the obvious loophole of policy change] If the US government did do that: - It would first be challenged in the courts.

That is... optimistic. What legal mechanism exactly would allow a challenge to a (as an example) classified National Security Letter expanding the hash set? What court has even the standing to hear that? What precedent makes you believe such a challenge would surface in time?

> - They would not be able to hide the fact they have changed it.

Why not? The hashes are not reversible. The list is not public. The matches are not auditable. Gag orders are legal. What in this system ensures visibility or accountability?

> - This would lead to service providers not assisting with the corrupted CSAM. - As this is a worldwide initiative the rest of the world can just disconnect the US from the CSAM until what is put in is confirmed.

The Apple proposal is not a worldwide initiative, but a US-driven proposal involving a handful of US orgs. EU involvement in the whole issue has been comparatively lacking and is often dependent on US lobbying and funding. The idea that the world could or would opt out assumes a degree of transparency and technical independence that simply does not exist on this planet right now.

If you want to argue that the system is technically robust against political misuse, then please do. If there are decent guardrails in place, I'd really truly do like to know about them. But so far, it mostly reads like a wish list.


> Realistically if countries want to read encrypted messages, they can already do so. Some do too. The fact that the EU is debating it is a good thing.

I agree that the discussion evolves the bill every time, and there are always good amounts of feedback and comments.

It’s a bit annoying when tech websites don’t always update themselves with the latest changes; just labelling it ChatControl doesn’t mean it’s the same policy that was discussed 5 years ago. It makes for good clickbait titles, but the technical nuances are missing.

For example, one would be interested to read a comparison between the “privacy” of a tool matching photos against a database of signatures vs., say, Apple’s performative privacy in the Photos app or the iCloud + ChatGPT/Apple Intelligence mix.


> Ironically when Apple introduced their solution it was actually better than what we have now. It was interesting to watch people lose their minds because they didn't understand how the current or proposed system worked.

What, the cloud scanning of user photos was a good idea for you? The private company deciding what is good or bad idea? The automated surveillance that could lead to people wrongfully accused idea?

> If it didn't get flagged then it stayed encrypted on the cloud and no one could look at it.

If Apple can decrypt your data when they find a match, they can decrypt ALL your data. Who says it will be used for good? Do you trust a private company this much?


> What, the cloud scanning of user photos was a good idea for you?

That is what was happening before Apple's suggestion, and it is still happening.

> The automated surveillance that could lead to people wrongfully accused idea?

A hash scan is perfectly fine. It can tell you nothing about what is in your file except that it matches another file that they know is CP.

Even then a flagged item has to be reviewed by law enforcement in case of a mistake and a single file is normally not enough to convict.

The chance of a mismatch is very slim. Facebook, for example, reports a 1 in 50 billion chance of a mismatch.

To put that in context: the chance of a mismatch is 1 photo every 10 years across all users of Facebook (approx 3 billion active users).

> If Apple can decrypt your data when they find a match, they can decrypt ALL your data.

Again. This is what is happening now for ALL service providers.

Apple's suggestion was that if a file wasn't flagged, it could only be decrypted by the owner's device and nothing else. Not even Apple.


Are you OK with private companies basically playing the police with your data?


Let me give you a better answer to your question.

Yes I am OK with how CSAM works.

1. It is not owned by a private company.

2. Hash checking requires a 1:1 match to be flagged.

3. Any match is reviewed by law enforcement to confirm it matches what is recorded in the CSAM database. This means checking your file against a descriptive record of what the file is.

4. The chance of a mismatch is so remote that it's not even an issue for me. Even if you do get a mismatch, it is a human that reviews it.

5. To submit a file to CSAM requires a lengthy detailed process where multiple humans review and approve before creating the Hash.

6. Every hash has a chain of custody. So in the unlikely event that something else is put into the CSAM database, you can see all the people who interacted with the system to put that hash in.

7. Service providers can be sued for content they submitted, so they have a strong incentive to ensure what goes in is valid.

This process has been in place for 15 years or so.


Does that mean your code is annotated with 300+ instances of `#[allow(clippy::unwrap_used)]` et al?


It was when I first set it up; then I went through every single instance and refactored with the appropriate choice. It wasn't as tedious as you might imagine, and again, I really don't have the option of letting my game crash.

I think the only legitimate uses are for direct indexing for tile maps etc., where I do bounds checking on two axes and know that it will map correctly to the underlying memory (but that's `clippy::indexing_slicing`; I have 0 `clippy::unwrap_used` in my codebase now).

If you begin a new project with these lints, you'll quickly train to write idiomatic Option/Result handling code by default.
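For anyone curious, the setup looks roughly like this (these are real clippy lints; the tile-lookup function is just a made-up illustration):

    // At the crate root (main.rs / lib.rs):
    #![deny(clippy::unwrap_used, clippy::expect_used, clippy::indexing_slicing)]

    // Idiomatic alternative to `tiles[i]`: .get() turns an out-of-bounds
    // access into None instead of a panic, and checked arithmetic avoids
    // overflow in the index computation.
    fn tile_at(tiles: &[u8], x: usize, y: usize, width: usize) -> Option<u8> {
        tiles.get(y.checked_mul(width)?.checked_add(x)?).copied()
    }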


Cars break too, but roads for cars are typically constructed such that you won't go off a cliff or under a heavier vehicle like a train even if it happens. I believe GP's point is that cyclists' safety in failure scenarios is often not similarly accounted for.


It sounds like you should read the docs. It's just a subject-specific abbreviation, not an advertising trick.


But it is false advertising when it's used all over the internet as "Rust is safe!" Telling the whole world to RTFM to justify your co-opting of the generic word "safe" is like advertisers telling you to read the fine print: a sleazy tactic.


It's not that either, and you are validating the GP's point. Rust has a very specific 'unsafe' keyword that every Rust developer implicitly and instinctively interprets as 'potentially memory-unsafe'. Consequently, 'safe' is interpreted as the opposite - 'guaranteed memory-safe'. Using that word as an abbreviation among Rust developers is therefore not uncommon.

However, while speaking about the Rust language in general, all half-decent Rust developers specify that it's about memory safety. Even the Rust language homepage has only two instances of the word - 'memory-safety' and 'thread-safety'. The accusations of sleaziness and false advertising are disingenuous at best.


I believe the post you're replying to is making a joke, referencing a classic complaint/meme that "Linux is only free if your time has no value".


The thing that makes the joke funny is that it truly has reversed.

Fedora KDE spin has been a less painful experience than Windows 11 in every way, for me.


Indeed, and so has dartos' point. Not even hardware support is a sure thing on Windows any longer. And I'm not talking exotic things, just your random enterprise HP laptop.


Are you saying HP is building laptops that ship with Windows that lack drivers provided by HP?


I've had issues with two EliteBooks, one 845g8 (AMD) and one 840g8 (Intel). These are ~2020 models IIRC; I got the AMD at the end of 2020 and the Intel in early 2021.

They both shipped with Windows 10, which I didn't use for any length of time, since I daily drive Linux. I updated them both to the latest Windows 11 available at the time, with a fresh install, and installed all the drivers from HP.

With the AMD, fresh out of the box, there was some issue with the backlight. The screen would be very dim, even while set all the way up. I initially thought there was some hardware issue, but booting into the BIOS burned my eyes, so the hardware was fine. I didn't really use it for any length, so don't know how that install fared otherwise. After the reinstall, the backlight worked as expected. But, for around half a year, the webcam wouldn't be detected. It suddenly started working after some update or other. This machine can also not sleep for any length of time under Windows. It will usually reboot by itself after a while, and Windows has no idea why (event log says "unexpected shutdown"). Less often, though, it will just get stuck when attempting to wake up (fans full tilt, screen off). The hardware was otherwise in good working condition, since everything worked under Linux since day 1. Yes, including sleep and audio.

With the Intel, the backlight seemed fine initially. I also didn't spend much time with it as delivered and reinstalled it. First off, Windows didn't detect the touchpad, nor the trackpoint, nor the Wifi. I did have a mouse lying around which worked (fortunately I don't care for BT peripherals), but then would have had to jump through some hoops to convince it to not insist on connecting to some server. Fortunately, it was going to be domain-joined so it left me be. Then the fun started. With the HP usb-c dock, the external screen would usually not be recognized at 4k@60 unless I did a plug-unplug-replug at the right time. Sleep would also be unreliable, sometimes waking up to a blank screen and fans going full tilt, sometimes to a garbled image. The output thing has been fixed after a while by using the drivers from Intel's website, but Windows would helpfully update them every other day to the borked ones. In the end, the drivers supplied by Windows have caught up, but my other random Chinese dock still won't output 4k@60. Sleep also mostly works fine now. Never had any issue on Linux. The Chinese dock works fine on Linux, and also on Windows with the AMD laptop.


Judging by the state of bluetooth and compatibility with their own docks, that might literally be the case.

