> never going to be updated, so now we're stuck with it.
Try going to any 1998 web page in a modern browser... It's generally so broken as to be unusable.
Besides every page telling me to install Flash, most links are dead, most scripts don't run properly (VBScript!?), TLS versions are now incompatible, etc.
We shouldn't put much effort into backwards compatibility if it doesn't work in practice. The best bet to open a 1998 web page is to install IE6 in a VM, and everything works wonderfully.
The vast majority of pages from 1998 work fine today. VBScript was always a tiny minority of scripting. And link rot is an undeniable problem, but that's not an issue with the page itself.
You’re unlikely to find a 1998-era Web page still running a 1998-era SSL stack. SSL was expensive (computationally and CA-cartel-ically), so basically banks and online shopping would have used SSL back then.
The ISS is a good example of a fully isolated environment. No new bacteria or viruses arrive there except aboard visiting spacecraft.
I've been curious for a while what human health would look like if a small group of people were isolated for many decades. Would they effectively be disease-free after the first few weeks?
As well as removing flu and colds, might it also reduce things like heart disease and Alzheimer's, for which we have weak evidence of links to transmissible diseases?
The downside to doing that is that their immune systems would be weak in the end. We survive colds and flu because we have had them before, but someone going many years without the yearly viruses would get hit 100x harder, potentially even dying.
So this is almost certainly redaction by the journalists?
It is disappointing they didn't mark those sections "redacted", with an explanation of why.
It is also disappointing they didn't have enough technical know-how to at least take a screenshot and publish that, rather than the original PDF, which presumably still contains all kinds of info in the metadata.
Yes, the journalists did the redactions. The metadata timestamps in one of the documents show that the versions were created three weeks before the publication.
And to be honest, the journalists have generally done great work on pretty much all the other published PDFs. We've gone through hundreds and hundreds of the published documents, and these two were pretty much the only ones where a metadata leak revealed something significant by mistake (there are other documents with metadata leaks/failed redactions, but nothing huge). Our next part will be a technical deep-dive on the PDF forensics/metadata analysis we've done.
I kinda assumed they wouldn't need any money because AI companies give them free credits to evaluate the models, and users ask questions and rate for free because they get to use decent AI models at no cost...
Beyond that there is coding up a web page, which as we all know can be vibe coded in a few hours...
They may not say "turn off BitLocker", but people definitely recommend backing up the recovery keys, and Windows allows you to back up the key to Microsoft because they know people won't actually do it themselves. Not sure if that happens by default, but they provide a variety of options for the recovery keys because there is definitely a non-zero chance you will need them. There were several stories of this happening during the Windows 10 → 11 upgrade push, where people were auto-updated and then left scrambling to decrypt their hard drives.
The only way to protect against that is if a secure application boundary is enforced by the operating system. You can make it harder for other programs to uncover secrets by encrypting them, but any other application can reverse the encryption. I don't believe using the TPM meaningfully changes that situation.
I suspect that they do not actually contain the encryption key. It is more convenient if the disk encryption key is stored on the disk, but separately encrypted. You actually want to store the key multiple times, once for each unlock method. If the disk can be unlocked with a password, then you store the key encrypted using the password (or encrypted using the output of a key derivation function run on the typed password). If it can be unlocked with a smartcard, then you store a copy that is encrypted using a key stored in the card. When BitLocker uses the TPM, it no doubt asks the TPM to encrypt the key and then stores that on the disk. To decrypt the disk it can ask the TPM to decrypt the stored key, which will only succeed if the TPM is in the same state that it was in when the key was encrypted.
The reason it's done this way is to allow multiple methods of accessing the disk, to allow the encryption password to be changed without having to rewrite every single sector of the disk, etc, etc. You can even “erase” the disk in one swift operation by simply erasing all copies of the key.
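A minimal sketch of that key-slot scheme, assuming a single password slot and using PBKDF2 + AES-GCM purely for illustration (BitLocker's real on-disk format and its TPM sealing protocol differ):

```python
# Hypothetical illustration of key wrapping: the random disk key never
# changes; each unlock method stores its own independently wrapped copy.
import os
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

disk_key = os.urandom(32)  # the key that actually encrypts the sectors

def wrap_with_password(password: bytes) -> dict:
    """Create a key slot: the disk key encrypted under a password-derived KEK."""
    salt, nonce = os.urandom(16), os.urandom(12)
    kek = PBKDF2HMAC(SHA256(), length=32, salt=salt,
                     iterations=600_000).derive(password)
    return {"salt": salt, "nonce": nonce,
            "wrapped": AESGCM(kek).encrypt(nonce, disk_key, None)}

def unwrap_with_password(slot: dict, password: bytes) -> bytes:
    """Recover the disk key from a slot; raises if the password is wrong."""
    kek = PBKDF2HMAC(SHA256(), length=32, salt=slot["salt"],
                     iterations=600_000).derive(password)
    return AESGCM(kek).decrypt(slot["nonce"], slot["wrapped"], None)

slot = wrap_with_password(b"hunter2")
assert unwrap_with_password(slot, b"hunter2") == disk_key
# Changing the password only re-wraps disk_key; "erasing" the disk means
# destroying every slot. Neither operation rewrites a single data sector.
```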
That is also required for any kind of key rotation to work: you rotate the key that wraps the key, because the alternative of using the key directly would mean re-encrypting the whole drive whenever it changes, and of course would only allow a single unlock method instead of multiple.
Working backups are important regardless, but if you use a TPM then you’d better have your recovery keys somewhere convenient. I’m sure you can print them out and keep them in your wallet or something.
Don't worry, MS pushes users to just put data on OneDrive; they will start losing data long before machines actually die. We've already had plenty of stories of that mess.
Their moat in the consumer world is the branding and the fact that OpenAI has 'memory', which you can't migrate to another provider.
That means responses can be far more tailored: it knows what your job is, knows where you go with friends, knows that when you ask about 'dates' you mean romantic relationships (and which ones are going well or badly), not the fruit, etc.
Eventually, when they make it work better, OpenAI can be your friend and confidant, and you wouldn't dump a friend of many years for a new one without good reason.
I really think this memory thing is overstated on Hacker News. It is not something that is hard to move at all. It's not a moat. I don't think most users even know memory exists outside of a single conversation.
Every single one of my non-techie friends who use ChatGPT relies heavily on memory. Whenever they try something different, they get very annoyed that it just doesn't "get them" or "know them".
Perhaps it'll be easy to migrate memories indeed (I mean there are already plugins that sort of claim to do it, and it doesn't seem very hard), but it certainly is a very differentiating feature at the moment.
I also use ChatGPT as my daily "chat LLM" because of memory, and, especially, because of the voice chat, which I still feel is miles better than any competition. People say Gemini voice chat is great, but I find it terrible. Maybe I'm on the wrong side of an A/B test.
This feels like an area where Google would have an advantage though. Look at all the data about you that Google has and could mine across Wallet, Maps, Photos, Calendar, Gmail, and more. Google knows my name, address, driver's license, passport, where I work, when I'm home, what I'm doing tomorrow, when I'm going on vacation and where I'm going, and a whole litany of other information.
The real challenge for Google is going to be using that information in a privacy-conscious way. If this was 2006 and Google was still a darling child that could do no wrong, they'd have already integrated all of that information and tried to sell it as a "magical experience". Now all it'll take is one public slip-up and the media will pounce. I bet this is why they haven't done that integration yet.
I used to think that, too, but I don't think it's the case.
Many people slowly open up to an LLM as if they were meeting someone. Sure, they might open up faster or share some morally questionable things earlier on, but there are some things that they hide even from the LLM (like one hides thoughts from oneself, only to then open up to a friend). To know that an LLM knows everything about you will certainly alienate many people, especially because who I am today is very different from who I was five years ago, or two weeks ago when I was mad and acted irrationally.
Google has loads of information, but it knows very little of how I actually think. Of what I feel. Of the memories I cherish. It may know what I should buy, or my interests in general. It may know where I live, my age, my friends, the kind of writing I had ten years ago and have now, and many many other things which are definitely interesting and useful, but don't really amount to knowing me. When people around me say "ChatGPT knows them", this is not what they are talking about at all. (And, in part, it's also because they are making some of it up, sure)
We know a lot about famous people, historical figures. We know their biographies, their struggles, their life story. But they would surely not get the feeling that we "know them" or that we "get them", because that's something they would have to forge together with us, by priming us the right way, or by providing us with their raw, unfiltered thoughts in a dialogue. To truly know someone is to forge a bond with them — to me, no one is known alone, we are all known to each other. I don't think google (or apple, or whomever) can do that without it being born out of a two-way street (user and LLM)[1]. Especially if we then take into account the aforementioned issue that we evolve, our beliefs change, how we feel about the past changes, and others.
[1] But — and I guess sort of contradicting myself — Google could certainly try to grab all my data and forge that conversation and connection. Prompt me with questions about things, and so on. Like a therapist who has suddenly come into possession of all our diaries and whom we slowly, but surely, open up to. Google could definitely intelligently go from the information to the feeling of connection.
Maybe. I haven't really heard many of the people in my circles describing an experience like that ("opening up" to an LLM). I can't imagine *anyone* telling a general-purpose LLM about memories they cherish.
Do people want an LLM to "know them"? I literally shuddered at the thought. That sounds like a dystopian hell to me.
But I think Google has, or can infer, a lot more of that data than people realize. If you're on Android you're probably opted into Google Photos, and they can mine a ton of context about you out of there. Certainly infer information about who is important to you, even if you don't realize it yourself. And let's face it, people aren't that unique. It doesn't take much pattern matching to come up with text that looks insightful and deep, but is actually superficial. Look at cold-reading psychics for examples of how trivial it is.
Another data point: my generally tech-savvy teenage daughter (17) says that her friends are only aware of AI having been available for the last year (three, actually), and basically only use it via Snapchat's "My AI" (which is powered by OpenAI) as a homework helper.
I get the impression that most non-techies have either never tried "AI", or regard it as Google (search) on steroids for answering questions.
Maybe it's more related to his (sad but true) senility than to lack of interest, but I was a bit shocked to see the physicist Roger Penrose interviewed recently by Curt Jaimungal: when asked if he had tried LLMs/ChatGPT, he assumed the conversation was about the "stupid lady" (his words) ELIZA (the fake chatbot from the '60s), evidently never having even heard of LLMs!
My mom does. She's almost 60. She asks for recipes and facts, asks about random illnesses, asks it why she's feeling sad, asks it how to talk to her friend with terminal cancer.
I didn't tell her to download the app, nor is she a tech-y person; she just did it on her own.
Exactly. I went through a phase of playing around with ESP32s and now it tries to steer every prompt about anything technology or electronics related back to how it can be used in conjunction with a microcontroller, regardless of how little sense it makes.
I agree. For me it's annoying because everything it generates is too tailored to the first stuff I started chatting with it about. I have multiple responsibilities and I haven't been able to get it to compartmentalize. When I'm wearing my "radiology research" support hat, it assumes I'm also wearing my "MRI physics" hat and weaves everything toward MRI. It's really annoying.
It doesn't even change the responses a lot. I used ChatGPT for a year for a lot of personal stuff, and tried a new account with basic prompts and it was pretty much the same. Lots of glazing.
What kind of a moat is that? I think it only works in abusive relationships, not consumer economies. Is OpenAI's model being an abusive, money-grubbing partner? I suppose it could be!
If you have all your “stuff” saved on ChatGPT, you’re naturally more likely to stay there, everything else being more or less equal: Your applications, translations, market research . . .
I think this is one of the reasons I prefer claude-code and codex. All the files are on my disks, and if Claude or Codex were to disappear, nothing would be lost.
> Their moat in the consumer world is the branding and the fact open ai has 'memory' which you can't migrate to another provider.
Their 'memory' is mostly unhelpful and gets in the way. At best it saves you from prompting some context, but more often than not it adds so much irrelevant context that it overfits responses so hard it makes them completely useless, especially in exploratory sessions.
It's certainly valuable but you can ask Digg and MySpace how secure being the first mover is. I can already hear my dad telling me he is using Google's ChatGPT...
I think an OpenAI paper showed 25% of GPT usage is "seeking information". In that case Google also has an advantage from being the default search provider on iOS and Android. I do find myself using the address bar in a browser like a chat box.
The memory is definitely sort of a moat. As an example, I'm working on a relatively niche problem in computer vision (small, low-resolution images) and ChatGPT now "knows" this and tailors its responses accordingly. With other chatbots I need to provide this context every time else I get suggestions oriented towards the most common scenarios in the literature, which don't work at all for my use-case.
That may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now. I asked ChatGPT to roast me again at the end of last year, and I was a bit taken aback that it had even figured out the broader problem I'm working on and the high level approach I'm taking, something I had never explicitly mentioned. In fact, it even nailed some aspects of my personality that were not obvious at all from the chats.
I'm not saying it's a deep moat, especially for the less frequent users, but it's there.
> may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now
I’m not saying it’s minor. And one could argue first-mover advantages are a form of moat.
But the advantage is limited to those who have used ChatGPT. For anyone else, it doesn’t apply. That’s different from a moat, which tends to be more fundamental.
Ah, I guess I've been interpreting "moat" narrowly, such as, keeping your competitors from muscling in on your existing business, e.g. siphoning away your existing users. Makes sense that it applies in the broader sense as well, such as say, protecting the future growth of your business.
Sounds similar to how psychics work. Observing obvious facts and pattern matching, except in this case you made the job super easy for the psychic because you gave it a _ton_ of information, instead of a psychic having to infer from the clothes you wear, your haircut, hygiene, demeanor, facial expression etc.
Yeah, it somewhat is! It also made some mistakes analogous to what psychics would make, based on the limited sample of exposure it had to me.
For instance, I've been struggling against a specific problem for a very long time, using ChatGPT heavily for exploration. In the roast, it chided me for being eternally in search of elegant perfect solutions instead of shipping something that works at all. But that's because it only sees the targeted chats I've had with it, and not the brute force methods and hacks I've been piling on elsewhere to make progress!
I'd bet with better context it would have been more right. But the surprising thing is what it got right was also not very obvious from the chats. Also for something that has only intermittent existence when prompted, it did display some sense of time passing. I wonder if it noticed the timestamps on our chats?
Notably, that roast evolved into an ad-hoc therapy session and eventually into a technical debugging and product roadmap discussion.
A programmer, researcher, computer vision expert, product manager, therapist, accountability partner, and more, all in a package that I'd pay a lot of money for if it weren't available for free. If anything, I think the AI revolution is rather underplayed.
I just learned Gemini has "memory" because it mixed its response to a new query with a completely unrelated query I had beforehand, despite making separate chats for them. It responded as if they were the same chat. Garbage.
I recently discovered that if a sentence starts with "remember", Gemini writes the rest of it down as standing instructions. Maybe go look in there and see if there is something surprising.
It's a recent addition. You can view them in some settings menu. Gemini also has scheduled triggers, like "Give me a recap of the daily news every day at 9am based on my interests", and it will start a new chat with you every day at 9am with that content.
Yeah, all the politicians talking housing: the actual net governmental effect on housing is to massively constrain supply (quantity) in service of rather arbitrary qualitative standards. I'm all for "the building shouldn't spontaneously collapse" standards, but... two-acre lots? Restrictions on casting shadows? Accessory dwelling units? Nah. If the government wants to make a difference it should ban Euclidean zoning as it is currently practiced, full stop.
These qualitative standards are subjective. Everyone has something they deem essential that others don't. A lot of zoning regulation is an amalgamation of different people's preferences, trying to satisfy the aesthetic tastes of too many people at once.
For example, in New York City, take a walk on the streets next to the hundred-year-old skyscrapers downtown. It's miserable to me. Now walk alongside the midcentury skyscrapers in midtown. It's much better. The difference is entirely because of the shadows cast by the skyscrapers. The visceral reaction to shadows is so strong for me that I wholeheartedly support restrictions on casting them.
Now on two-acre lots. That’s not something I care about. Even 0.1 acres of land is too much maintenance for me. But it will be non-negotiable for someone else.
At one extreme, unlimitedly cheap housing leads to slums, ghettos, and crime. Such housing is not worth building because it costs the city more in reputation and police force. Take a look at Kowloon Walled City for an example of cheap construction without regulations.
So there has to be a line. Different people just draw the line differently. For me the issue with shadows is that these fancy buildings make the streets next to them dark and ghetto-like.
"We expect all states to expand housing supply by 2% per year. States which fail to meet this standard will pay $1000 per unbuilt house per year, as a subtraction from other federal funding. States may trade house building with other states to achieve this.".
Making this decision as a politician in this country is death to your career though; how could we incentivize our leaders to bite the hand that elects them?
If your local mayor decides to allow a tower block to be built next to your house, you might be pissed.
But if the president says 'we're gonna take away mayors' powers to restrict housebuilding', you won't be pissed yet... and when a builder comes along to build later, it'll be too late.
I think it's a bit more complicated than this. Disputed outcomes are decided by votes from UMA token holders, which are anonymously owned. I remember reading a theory (with evidence, can't find the article now) that the Venn diagram between Polymarket owners/stakeholders and UMA whales is close to a circle.
For the same reason people accept Tether's claims of solvency even though they refuse audits and obviously lie about ownership of various assets. For the same reason people ignore wash trading. For the same reason people continue using Sam-coins even after FTX's implosion.
Because (a) they're not paying attention, (b) they're in denial, and (c) they think they can profit in the short term before it collapses, or that the odds are in their favor to profit despite the risks.
I don't think this government particularly cares about convincing people of anything; they are just doing whatever they want, public opinion be damned.
I think you're hitting all the various bot walls. They sometimes deliberately break and show stale content so bots and scrapers don't know they're blocked. If you're logged in everything just works.
Because I learned JS before ECMAScript 6 was widely supported by browsers and haven't written a ton of it targeting modern browsers. You're right that it's unnecessary.