Anything that could put Musk or Trump in a negative light is immediately flagged here. Discussions about how Grok went crazy the other day were also buried.
If you want to know how big tech is influencing the world, HN is no longer the place to look. It's too easy to manipulate.
Anything that triggers the flamewar detector gets down-weighted automatically. Those two trigger discussions full of fast, poorly thought-out replies, and often way more comments than story upvotes, so stories involving them often trip that detector. On top of that, the discussion is usually tiresome and not very interesting, so people who would rather see more interesting things on the front page are more likely to flag it. It's not some conspiracy.
I think the intent is to de-amplify topics that produce shallow responses (the kind that can be quickly made and piled on). I still see plenty of those rise to the top of the feed though, so it's more of a "turn down the volume" than "mute".
Perhaps it’s not a conspiracy so much as that denying technology’s broader context provides a bit of comforting escapism from the depressing realities around us. Unfortunately, I think this escapism, while understandable, may not always be optimal either, as it contributes to the broader issues we face in society by burying them.
Even looking around the thread there's evidence that lots of other people can't even have the kind of meta-level discussion you're looking for without descending into the ideological-battle thing.
I might be more inclined to believe you if Elon-related posts were being flagged across the board, but they're not. Just yesterday, the Grok 4 launch video was on the front page, and the associated comments were full of flamewar content. Yet that post didn't get flagged -- presumably because it was a launch video favorable to Elon's interests.
There's a clear bias -- either on the part of the flaggers or on the part of HN itself -- in what gets flagged. If it has even a hint of criticism of Elon, it gets flagged. That makes this forum increasingly useless for discussion of obviously important tech topics (e.g., why one of the frontier AI models is spouting Nazi rhetoric).
In both of those cases there tends to be an abundance of comments denigrating either character in an unhinged, Reddit-style manner.
As far as I am concerned they are both clowns, which is precisely why I don't want to have to choose between correcting stupid claims, thereby defending them, and occasionally having an offshoot of r/politics around. I honestly would rather have all discussion related to them forbidden than the latter.
I don't think it takes any manipulation for people to be exhausted with that general dynamic either.
It’s not just Musk — as soon as you hit a certain level, it basically becomes impossible to fail. I’ve noticed that even if a senior leader is ousted from a company in disgrace, another company will invariably pick that person up fairly quickly.
The perverse thing is that betting on the irrational behavior of other investors seems paradoxically rational at this point. Just don't get caught holding the bag.
Investors are simply offsetting the losses to the next investor they sell to. The Musk brand is still valuable, and his bubble keeps growing. It is only the investors present when the bubble bursts who'll be at a loss.
You only need to grow your investment, sell it off to the next person, and exit before it happens.
I still don't understand why elmo isn't being investigated for fraud for all these obscenely unreal claims and promises he has made over the years. This is a clear-cut case of fraud, the kind for which the "female Steve Jobs" and another e-trucking clown CEO are doing their time. Why not elmo?
OMG, Elon’s simps are so fucking delusional. No, being wrong about a deadline is not fraud. Selling something with a promised deadline attached to it, and being wrong about that deadline for 6+ years, is fraud.
Well, Elon Musk is holding AI development ransom unless he's granted sufficient shares in Tesla to take his holdings above 25%. So I suppose they can give him billions to solve for self-driving the "easy" way, or the "hard" way.
How this is not a conflict of interest, I do not know; then again, it may explain why Elon wants to reincorporate Tesla in Texas, away from the Delaware courts.
Unique data set. And Elon. And with Elon comes a great pool of talent. From https://x.ai/about:
> Our team is led by Elon Musk, CEO of Tesla and SpaceX. Collectively our team contributed some of the most widely used methods in the field, in particular the Adam optimizer, Batch Normalization, Layer Normalization, and the discovery of adversarial examples. We further introduced innovative techniques and analyses such as Transformer-XL, Autoformalization, the Memorizing Transformer, Batch Size Scaling, μTransfer, and SimCLR. We have worked on and led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4.
Does it, though? That was probably true pre-X™, but it seems like the primary selection metric has gone from “competence” to “doesn’t ever contradict Elon”
But we know from Google that unless you can definitively solve the "is this sentence real or a joke" problem, datasets like Twitter, Reddit, etc. are going to be more trouble than they are worth.
And Elon's recent polarising behavior, and the callousness with which he disbanded the Tesla Supercharger team, mean that truly talented people aren't going to be as attracted to him as in his early days. They are only going to be there for the money.
The datasets should not be used for knowledge but for training a language model.
Using them for knowledge is bonkers.
Why not buy some educational textbook company and use 99.9% correct data?
Oh, and use RAG while you're at it, so you can point to the origin of the information.
The real evolution still has to come, though: we need to build a reasoning engine (Q*?) that will just use RAG for knowledge and language models to convert its thoughts into human language.
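To make that split concrete, here's a minimal sketch under those assumptions: a toy keyword retriever stands in for real retrieval, and the "language model" is stubbed out to show that it only phrases what was retrieved. The corpus, names, and scoring are all made up for illustration.

    # Toy RAG split: retrieval supplies the knowledge; the "language
    # model" (stubbed out below) only phrases the answer and cites sources.
    CORPUS = {
        "textbook-physics": "Water boils at 100 degrees Celsius at sea level.",
        "textbook-ml": "The Adam optimizer adapts per-parameter learning rates.",
    }

    def retrieve(query, k=1):
        # Naive keyword overlap; a real system would use embeddings + an index.
        words = set(query.lower().split())
        ranked = sorted(CORPUS.items(),
                        key=lambda kv: len(words & set(kv[1].lower().split())),
                        reverse=True)
        return ranked[:k]

    def answer(query):
        sources = retrieve(query)
        context = " ".join(text for _, text in sources)
        # Stand-in for the LM: rephrase the retrieved facts, never invent them.
        return "Based on %s: %s" % ([doc for doc, _ in sources], context)

    print(answer("what temperature does water boil at"))

The point of the structure: the model's weights carry the language, while the retrieval layer carries (and attributes) the knowledge.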
You use formal verification for the logic and RAG for the source data.
In other words: say you have a model that is semi-smart, often makes mistakes in logic, but sometimes gives valid answers. You use it to “brainstorm” physical equations and then use formal provers to weed out the wrong ones.
Even if the LLM is correct 0.001% of the time, it’s still better than the current algorithms, which are essentially brute-forcing.
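A toy version of that generate-and-verify loop, with hard-coded candidate formulas standing in for LLM brainstorming and a numeric check against observations standing in for the formal prover (a real verifier would prove the formula, not merely test it; everything here is illustrative):

    # Generate-and-verify: cheap candidates from a "semi-smart" generator,
    # filtered by a checker. A formal prover would replace the numeric test.
    DATA = [(1, 2), (2, 4), (3, 6)]  # observed (x, y) pairs, i.e. y = 2x

    CANDIDATES = ["x + 1", "x ** 2", "2 * x"]  # pretend LLM proposals

    def verifies(expr, data):
        # Accept a candidate only if it reproduces every observation.
        return all(eval(expr, {"x": x}) == y for x, y in data)

    print([expr for expr in CANDIDATES if verifies(expr, DATA)])  # ['2 * x']

Checking a candidate is far cheaper than searching the whole space, which is why even a tiny hit rate can beat brute force.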
I’m still confused about the value of training on tweets in that scenario, though.
If you need to effectively provide this whole secondary dataset to have better answers, what value do the tweets add to training other than perhaps sentiment analysis or response stylization?
I still fondly remember the story an OpenAI rep told about fine-tuning with company Slack history. Given a question like "Can you do this and that, please?", the system (after being fine-tuned on said history) answered, "Sure, I'll do it tomorrow." Teaches you to select your training data carefully.
It's $6B down the drain. Saying Grok 1.5 is competitive is a joke; if it were any good, it would rank well in Chatbot Arena (https://chat.lmsys.org/). Elon is a master at hyping underperforming things, and this is no exception.
No, there is no ranking for Grok. It’s not participating.
It would be hard to judge the rate of improvement at this point, since the company has only been around for 1.25 years and Grok 1.5 has yet to be released for general access.
You really think investors like Sequoia and a16z are dumb enough to fall for Elon hyping things up? They know who he is, they've seen him operate at levels basically no other entrepreneur can, and they are betting on that.
> You really think investors like Sequoia and a16z are dumb enough to fall for Elon hyping things up?
a16z invested $350M in Adam Neumann's real estate venture, after WeWork. VCs will absolutely knowingly invest on hype if they think it's going to last long enough for them to cash out with great returns.
I mean, he can try. The world already has a number of AI corporations headed up by totalitarian megalomaniacs, though; the market may eventually reward some other course of action.
The real bull case: Elon doesn't kowtow to mentally ill basement nerds and the media/politicians trying not to lose power.
Can you imagine someone running in to tell Elon the fat nerds on HN are in a tizzy about Grok telling people to eat rocks?
The other bull case: he's obviously siloing Twitter for unique training data. Reddit can only ask nicely that you don't train off them.
Twitter with a good AI could become quite strong. I'm not as bullish on this, but... Twitter is where all the cutting-edge news is. ChatGPT was happy to be years out of date.
No one cares that Russia has finally manned up and launched a tactical nuke 24 hours after it happens; something new will be trending. This is Twitter's strength: to-the-minute data. One of the AIs will have to specialize in this.
I would be asking the same question if another company formed in the past year raised $6B to train LLMs. For example, Mistral raised a significantly smaller round at a much lower valuation. Just trying to learn how others see this.
My parents in their 60s have discovered that Reddit is a great source of research to get real humans talking about a variety of topics (basically what Quora aimed to be but never was). I strongly feel they're just getting started.
Maybe. As a long-time Reddit user I’m looking for new alternatives, as it’s just getting more and more meme-y, with negative snark and groupthink. This WSB thing was a perfect example: if you try to do any critical thinking, you’ll get banned. Just post memes and move on.
A fair amount of critical thinking does go on at WSB. A good number of people there are actually investors who just role-play as "memelords". How else do you think the GameStop shorts were found in the first place?
Lately I have been adding "reddit" to all the questions I ask Google, because I know some "real" person on Reddit will have a relatable answer. I too feel this is untapped potential.
Very good point. That is exactly why I stopped visiting the known network sites for answers, especially for topics like car repair or home repair tips. It always leads to a fake SEO page with no real, useful information.
So then it's either Reddit or some very niche forums, but forums are hard to use.
I've been adding `site:reddit.com` to most of my searches for years, but I noticed this past year that Reddit threads rank highly on many searches even without this operator. That's huge.
I do that as well. If Reddit's search were better I’d happily start the search from there, but it does feel like a major failing on Google’s side when it can only provide me with SEO spam.
Parts of it (very small parts) are what Stack Overflow should be. The moderation and closing of questions (marking as duplicate or similar) on SO is so officious and malicious that it’s easy to just avoid it.
The right subs for one's level are really, really helpful.
I had read that, and so the "cost" is the marginal cost of "support", "moderation", and "legal" (assuming no new features, QA is probably not huge). So is it possible to operate a feature that has user-generated content at a positive margin? One without compromising privacy or incurring an untenable liability? I accept that this may not be possible, but it seems like that is where the engineering work is.
And yes, I know product managers push "we need new stuff" all the time. And sales is constantly trying to sell something you don't make. But those are execution issues that good management will moderate.