
I've come to the same conclusion: If you just want a huge volume of code written as fast as possible, and don't care about 1. how big it is, 2. how fast it runs, 3. how buggy it is, 4. how maintainable or understandable it is, or 5. the overall craftsmanship and artistry of it, then you're probably seeing huge productivity gains! And this is fine for a lot of people and for a lot of companies: Quality really doesn't matter. They just care about shitting out mediocre code as fast as possible.

If you do care about these things, it will take you overall longer to write the code with an LLM than it would by hand-crafting it. I started playing around with Claude on my hobby projects, and found it requires an enormous amount of exhausting handholding and post-processing to get the code to the point where I am really happy with it as a consistent, complete, expressive work of art that I would be willing to sign my name to.


>shitting out mediocre code as fast as possible.

This really is what businesses want and always have wanted. I've seen countless broken systems spitting out wrong info that was actively used by businesses throughout my career, before AI. They literally did not want them fixed when I brought it up, because dealing with errors had become part of the process in pretty much all cases. I don't even try anymore unless I'm specifically brought on to fix a legacy system.

>that I would be willing to sign my name to.

This right here is what mgmt thinks is the big "problem" that AI solves. They have always wanted us to magically know what parts are "good enough" and what parts can slide, but for us to bear the burden of blame. The real problem is the same as always: bad specs. AI won't solve that, but in their eyes it will remove a layer of their poor communication. Obviously no SWE is going to build a system that spits out wrong info and just say "hire people to always double-check the work" or add it to so-and-so's job duties to check, but that really is the solution most places seem to arrive at by lack of decision.

Perhaps there is some sort of failure of SWEs to understand that businesses don't care. Accounting will catch the expensive errors anyway. Then execs will bullwhip middle managers and it will go away.


The adversarial tension was all that ever made any of it work.

The "Perfectionist Engineer" without a "Pragmatic Executive" to press them into delivering something good enough would of course still been in their workshop, tinkering away, when the market had already closed.

But the "Pragmatic Executive" without the "Perfectionist Engineer" around to temper their naive optimism would just as soon find themselves chased from the market for selling gilded junk.

You're right that there do seem to be some execs, in the naive optimism that defines them, eager to see if this technology finally lets them bring their vision to market without the engineer to balance them.

We'll see how it goes, I guess.


> Perhaps there is some sort of failure of SWEs to understand that businesses don't care

I think it's an engineer's nature to want to improve things and make them better, but then we naively assume that everybody else also wants to improve things.

I know I personally went through a pretty rough disillusionment phase where I realised most of the work I was asked to do wasn't actually to make anything better, but rather to achieve some very specific metrics that actually made everything but that metric worse.

Thanks to the human tendency to fixate on narratives, we can (for a while) trick ourselves into believing a nice story about what we're doing even if it's complete bunk. I think that false narrative is at the core of mission statements and why they intuitively feel fake (mission statement is often more gaslighting than guideline - it's the identity a company wants to present, not the reality it does present).

AI is eager to please and doesn't have to deal with that cognitive dissonance, so it's a metric chaser's dream.


> This really is what businesses want and always have wanted.

There's a difference between what businesses really want and what executives know they want. You make it sound like every business makes optimal decisions to get optimal earnings.

> They literally did not want it fixed when I brought it up because

Because they thought they knew what earns them profits. The key here is that they thought they knew.

The real problem behind the scenes is a lot of management is short term. Of course they don't care. They roll out their shiny features, get their promotions and leave. The issues after that are not theirs. It is THE business' problem.


Is there any recent example of one of these huge tech companies actually reducing advertising due to people "voting with their wallets"? Or even making any customer-favoring change whatsoever (ad-related or otherwise) as a result of voting with wallet? "Vote with your wallet" gets trotted out here all the time but it doesn't work.

Doesn't work for who? If you stop using their service then you're not subject to getting your data sold by them because that data simply won't exist. There is no inherent need to get tech companies to "stop advertising" on a societal level.

It gets trotted out a lot here because the overarching narrative on HN is that regulation is the answer to everything, when it's easier to just... not use the thing if you don't like it. Rather than creating a mountain of regulations that only big business can comply with, I think it's better to choose what you do with your money as a consumer.


Doesn't work for YOU, because Uber burned many billions across 15 years making sure they killed all their competitors.

In most places, your options for a taxi service are Uber or go fuck yourself. That's how they're able to get away with their price gouging, privacy recklessness, and share-cropped labor.

Free market dynamics only work if you are in a free market. We're not, there's one player, and they won the market by literally just cheating and breaking the law. Sorry, sorry, "disrupting".


> You wouldn't go to a doctor, hear that you need an appendix removed, and feel "belittled and undermined"!

It happens more than you'd think, even in the HN comment section! Go to any thread where the topic is medical or diseases. Plenty of people distrust their doctor and advocate going to the doctor with your own crackpot theory you "researched" on WebMD. There's a huge anti-credential streak, even here. A lot of people see professional service providers of all kinds as "mere gatekeeping implementors of my own ideas" rather than experts in the field.


LLMs all behave as if they are semi-competent (yet eager, ambitious, and career-minded) interns or administrative assistants, working for a powerful CEO-founder. All sycophancy, confidence and positive energy. "You're absolutely right!" "Here's the answer you are looking for!" "Let me do that for you immediately!" "Here is everything I know about what you just mentioned." Never admitting a mistake unless you directly point it out, and then all sorry-this and apologize-that and "here's the actual answer!" It's exactly the kind of personality you always see bubbling up into the orbit of a rich and powerful tech CEO.

No surprise that these products are all dreamt up by powerful tech CEOs who are used to all of their human interactions being with servile people-pleasers. I bet each and every one of them is subtly or overtly shaped by feedback from executives about how they should respond in conversation.


I agree entirely, and I think it's worthwhile to note that it may not even be the LLM that has that behavior. It's the entire deterministic machinery between the user and the LLM that creates that behavior, with the system prompt, personality prompt, RLHF, temperature, and the interface as a whole.

LLMs have an entire wrapper around them tuned to be as engaging as possible. Most people's experience of LLMs is a strongly social media and engagement economy influenced design.


> "You're absolutely right!" "Here's the answer you are looking for!" "Let me do that for you immediately!" "Here is everything I know about what you just mentioned." Never admitting a mistake unless you directly point it out, and then all sorry-this and apologize-that and "here's the actual answer!" It's exactly the kind of personality you always see bubbling up into the orbit of a rich and powerful tech CEO.

You may be on to something there: the guys and gals that build this stuff may very well be imbuing these products with the kind of attitude that they like to see in their subordinates. They're cosplaying the 'eager to please' element to the point of massive irritation, and have left out the one feature that could serve to redeem such behavior, which is competence.


> the guys and gals that build this stuff may very well be imbuing these products with the kind of attitude that they like to see in their subordinates

Or that the individual developers see in themselves. Every team I've worked with in my career had one or two of these guys: when the Director or VP came to town, they'd instantly launch into brown-nose mode. One guy was overt about it and would say things like "So-and-so is visiting the office tomorrow--time to do some petting!" Both the executive and the subordinate have normalized the "royal treatment" on the giving and receiving end.


An alternative is that these patterns just increase the likelihood of the next thing the model outputs being correct, and thus are useful to insert during training as the first thing it says before giving an answer.

What's next, motivational speaking for LLMs?

I remember reading about speaking in an encouraging manner to agentic AI leading to better results, but I can’t seem to find a citation for this.

That's pathetic. Pleading comes next then. And after that most likely praying.

Analogies of LLMs to humans obfuscate the problem. LLMs aren't like humans of any sort in any context. They're chat bots. They do not "think" like humans, and applying human-like logic to them does not work.

You're right, mostly, but the fact remains that the behavior we see is produced by training, and the training is driven by companies run by execs who like this kind of sycophancy. So it's certainly a factor. Humans are producing them, humans are deciding when the new model is good enough for release.

Do you honestly think an executive wanted a chat bot that confidently lies?

Do the lies look really good in a demo when you're pitching it to investors? Are they obscure enough that they aren't going to stand out? If so, no problem.

Given the matrix 'competent/incompetent' / 'sycophant/critic' I would not take it as read that the 'incompetent/sycophant' quadrant would have no adherents, and I would not be surprised if it was the dominant one.

In practice, yes, though they wouldn't think of it that way because that's the kind of people they surround themselves with, so it's what they think human interaction is actually like.

"I want a chat bot that's just as reliable at Steve! Sure he doesn't get it right all the time and he cost us the Black+Decker contract, but he's so confident!"

You're right! This is exactly what an executive wants to base the future of their business off of!


You say that like it’s untrue, but they measurably prefer a lying but confident salesman over one who doesn’t act with that kind of confidence.

This is very slightly more rational than it seems because repeating or acting on a lie gives you cover.


Yes, that is in fact their revealed preference.

Did you have a point?


You use unfalsifiable logic. And you seem to argue that, given the choice, CEOs would prefer not to maximize revenue in favor of... what, affection for an imaginary intern?

They may say they don't want to be lied to, but the incentives they put in place often inevitably result in them being surrounded by lying yes-men. We've all worked for someone where we were warned to never give them bad news, or you're done for. So everyone just lies to them and tells them everything is on track. The Emperor's New Clothes[1].

1: https://en.wikipedia.org/wiki/The_Emperor%27s_New_Clothes


No, but they like the sycophancy.

People with immense wealth, connections, influence, and power demonstrably struggle to not surround themselves with people who only say what the powerful person already wants to hear regardless of reality.

Putin didn't think Russia could take Ukraine in 3 days with literal celebration by the populace because he only works with honest folks for example.

Rich people get disconnected from reality because people who insist on speaking truth and reality around them tend to stop getting invited to the influence peddling sessions.


It’s not about thinking, it’s about what they are trained to do. You could train a LLM to always respond to every prompt by repeating the prompt in Spanish, but that’s not the desired behavior.

I don't think these LLMs were explicitly designed based on the CEO's detailed input that boils down to 'reproduce these servile yes-men in LLM form please'.

Which makes it more interesting. Apparently reddit was a particularly hefty source for most LLMs; your average reddit conversation is absolutely nothing like this.

Separate observation: That kind of semi-slimey obsequious behaviour annoys me. Significantly so. It raises my hackles; I get the feeling I'm being sold something on the sly. Even if I know the content in between all the sycophancy is objectively decent, my instant emotional response is negative and I have to use my rational self to dismiss that part of the ego.

But I notice plenty of people around me that respond positively to it. Some will even flat out ignore any advice if it is not couched in multiple layers of obsequious deference.

Thus, that raises a question for me: Is it innate? Are all people placed on a presumably bell-curve shaped chart of 'emotional response to such things', with the bell curve quite smeared out?

Because if so, that would explain why some folks have turned into absolute zealots for the AI thing, on both sides of it. If you respond negatively to it, any serious attempt to play with it should leave you feeling like it sucks to high heavens. And if you respond positively to it - the reverse.

Idle musings.


The servile stuff was trained into them with RLHF, with the trainers largely being low-wage workers in the global south. That's also where some of the other quirks, like the excessive em-dash usage, came from. I think it's a combination of those workers anticipating how they would be expected to respond by a first-world employer, and also explicit instructions given to them about how the robot should be trained.

I suspect a lot of the em-dash usage also comes from transcriptions of verbal media. In the spoken word, people use the kinds of asides that elicit an em-dash a lot.

I would bet like a dollar that the supposed em-dash usage (which I'm not convinced is an accurate take in the first place) would have come from an enterprising dev somewhere being like "Well, we probably don't need multiple tokens for hyphens" and coercing every dash-type thing to just one hyphen-like token.

But I'm also showing off my ignorance with how these machines turn text into tokens in practice.


I think all the em-dashes came from scraping Wordpress blogs. The Wordpress editor does "typography", then the em-dashes thus introduced survive the HTML-to-Markdown process used to scrape them, and end up in datasets.

EDIT: Also PDFs authored in MS Word.


If that were true, it would mean that it couldn't output hyphenated words without turning the hyphens into em dashes.

Two dashes is still a token. You would only be correct if LLMs were still thinking at the level of characters.
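
For what it's worth, this is easy to check with a real tokenizer. A quick sketch using OpenAI's tiktoken library (the exact token IDs are specific to the cl100k_base encoding and purely illustrative):

    # Show how different dash characters tokenize; IDs depend on the encoding.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for s in ["-", "--", "—", "well-known"]:
        print(repr(s), "->", enc.encode(s))
    # "--" and the em dash each get their own token(s), distinct from a single
    # hyphen, so dashes aren't all collapsed into one hyphen-like token.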

This is a really interesting observation. As someone who feels disquiet at the obsequiousness but has been getting used to just mentally skipping over the first paragraph, that's put an interesting spin on my behaviour.

Thanks!


It’s not innate. Purpose-trained LLMs can be quite stubborn and not very polite.

That's the audience! Incompetent CEOs!

Nearly every woman I know who is an English as a second language speaker is leaning hard into these things currently to make their prose sound more natural. And that has segued into them being treated almost as a confidant or a friend.

As flawed as they are currently, I remain astounded that people think they will never improve and that people don't want a plastic pal who's fun to be with(tm).

I find them frustrating personally, but then I ask them deep technical questions on obscure subjects and I get science fiction in return.


> I get science fiction in return.

And once this garbage is in your context, it's polluting everything that comes after. If they don't know, I need them to shut up. But they don't know when they don't know. They don't know shit.


I am reminded of AI summaries and Microsoft Copilot. All push low value. But I separate that from the underlying potential of the technology. And I wish we heard more from deep domain experts like Karpathy and less from influencer dilettantes like Dylan Patel about where this is going.

I want to query a bayesian ontology, not a Markov chain with delusions of grandeur.

Alas, computation costs energy, so you get what you can afford.

Also one thing I thought LLMs did already is kill the misguided idea of applying prescriptive, formal categorization to the real world.


As an EE working in engineering 30 years, I ran out of fingers and toes 29 years ago trying to count the number of asocial, incompetent programmer Dark Triads who can only relate to the world through esoteric semantics unrelated to engineering problems right in front of them.

"To add two numbers I must first simulate the universe." types that created a bespoke DSL for every problem. Software engineering is a field full of educated idiots.

Programmers really need to stop patting themselves on the back. Same old biology with the same old faults. Programmers are subjected to the same old physics as everyone else.


There is sort of the opposite problem as well, as the top comment was saying, where it can super confidently insist that it's absolutely right and you're wrong, instead of asking questions to try and understand what you mean.

The problem with these LLM chat-bots is they are too human, like a mirror held up to the plastic-fantastic society we have morphed into. Naturally programmed to serve as a slave to authority, this type of fake conversation is what we've come to expect as standard. Big smiles everyone! Big smiles!!

Nah. Talking like an LLM would get you fired in a day. People are already suspicious of ass-kissers, they hate it when they think people are not listening to them, and if you're an ass-kisser who's not listening and is then wrong about everything, they want you escorted out by security.

The real human position would be to be an ass-kisser who hangs on every word you say, asks flattering questions to keep you talking, and takes copious notes to figure out how they can please you. LLMs aren't taking notes correctly yet, and they don't use their notes to figure out what they should be asking next. They're just constantly talking.


Looking forward to living in a society where everyone feels like they’re CEOs.

Isn’t it kind of true that the systems we as servile people-pleasers have to operate within are exactly these? The hierarchical status games and alpha-animal tribal dynamics are these. Our leaders, who are so mighty and rich and powerful, want to keep their position, and we don’t want to admit they have more influence than we do over things like AI now, and so we stand and watch naively as they reward the people-pleasers, and eventually, historically, we learn(ed) it pays to please until leadership changes.

> LLMs all

Sounds like you don't know how RLHF works. Everything you describe is post-training. Base models can't even chat, they have to be trained to even do basic conversational turn taking.


> Everything you describe is post-training. Base models can't even chat, they have to be trained to even do basic conversational turn taking.

So, that's still training then, so not 'post-training'. Just a different training phase.
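
To make "can't chat" concrete: a base model just continues raw text, and conversational turn-taking comes from fine-tuning on transcripts marked up with special turn tokens. A rough sketch of one common convention (ChatML-style markers; the exact strings vary by model, and format_chat is just a hypothetical helper, not any vendor's API):

    # Illustrative chat-transcript formatting of the kind used in post-training.
    # The <|im_start|>/<|im_end|> markers follow the ChatML convention; other
    # models use different special tokens.
    def format_chat(messages):
        parts = [f"<|im_start|>{role}\n{text}<|im_end|>" for role, text in messages]
        parts.append("<|im_start|>assistant\n")  # cue the model to take its turn
        return "\n".join(parts)

    print(format_chat([("system", "You are terse."), ("user", "Hi there.")]))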


This is partly true, partly false, partly false in the opposite direction, with various new models. You really need to keep updating and have tons of interactions regularly in order to speak intelligently on this topic.

Maybe this is also part of the problem? Once I learn the idiosyncrasies of a person, I don't expect them to dramatically change overnight; I know their conversational rhythms and beats, how to ask / prompt / respond. LLMs are like an eager sycophantic intern who completely changes their personality from conversation to conversation, or - surprise - exactly like a machine

>LLMs are like an eager sycophantic intern who completely changes their personality from conversation to conversation

Again, this isn't really true with some recent models. Some have the opposite problem.


Wow. I haven't written software for Windows in over a decade. I always thought Apple was alone in its invasive treatment of developers on their platform. Windows used to be "just post the exe on your web site, and you're good to go." I guess Microsoft has finally managed to aggressively insert themselves into the distribution process there, too. Sad to see.

> Windows used to be "just post the exe on your web site, and you're good to go."

That's also one of the main reasons why Windows was such a malware-ridden hellspace. Microsoft went the Apple route to security and it worked out.

At least Microsoft doesn't require you to dismiss the popup, open the system settings, click the "run anyway" button, and enter a password to run an unsigned executable. Just clicking "more details -> run anyway" still exists on the SmartScreen popup, even if they've hidden it well.

Despite Microsoft's best attempts, macOS still beats Windows when it comes to terribleness for running an executable.


I just wish these companies could solve the malware problem in a way that doesn't always involve inserting themselves as gatekeepers over what the user runs or doesn't run on the user's computer. I don't want any kind of ongoing relationship with my OS vendor once I buy their product, let alone have them decide for me what I can and cannot run.

I get that if you're distributing software to the wider public, you have to make sure these scary alerts don't pop up regardless of platform. But as a savvy user, I think the situation is still better on Windows. As far as I've seen there's still always a (small) link in these popups (I think it's SmartScreen?) to run anyway - no need to dig into settings before even trying to run it.

Are you sure? I had not used Windows for years and assumed "Run Anyway" would work. Last month, I tested running an unsigned (self-signed) .MSIX on a different Windows machine. It's a 9-step process to get through the warnings: https://www.advancedinstaller.com/install-test-certificate-f...

Perhaps .exe is easier, but I wouldn't subject the wider public (or even power users) to that.

So yeah, Azure Trusted Signing or EV certificate is the way to go on Windows.


I've found that your step 6 takes the vast majority of the time I spend programming with LLMs. Like 10X+ the combined total of time steps 1-5 take. And that's if the code the LLM produced actually works. If it doesn't work (which happens quite often), then even more handholding and corrections are needed. It's really a grind. I'm still not sure whether I am net saving time using these tools.

I always wonder about the people who say LLMs save them so much time: Do you just accept the edits they make without reviewing each and every line?


You can have the tool start by writing an implementation plan describing the overall approach and key details including references, snippets of code, task list, etc. That is much faster than a raw diff to review and refine to make sure it matches your intent. Once that's acceptable the changes are quick, and having the machine do a few rounds of refinement to make sure the diff vs HEAD matches the plan helps iron out some of the easy issues before human eyes show up. The final review is then easier because you are only checking for smaller issues and consistency with the plan that you already signed off on.

It's not magic though, this still takes some time to do.


I exclusively use the autocomplete in cursor. I hate reviewing huge chunks of llm code at one time. With the autocomplete, I’m in full control of the larger design and am able to quickly review each piece of llm code. Very often it generates what I was going to type myself.

Anything that involves math or complicated conditions I take extra time on.

I feel I’m getting code written 2 to 3 times faster this way while maintaining high quality and confidence


This is my preferred way as well. And when you think about it, it makes sense. With advanced autocomplete you are:

1. Keeping the context very small
2. Keeping the scope of the output very small

With the added benefit of keeping you in the flow state (and in my experience making it more enjoyable).

To anyone that even hates LLMs, give autocomplete a shot (with a keybinding to toggle it if it annoys you; sometimes it’s awful). It’s really no different than typing it manually wrt quality etc, so the speed-up isn’t huge, but it feels a lot nicer.


Maybe it subjectively feels like 2-3x faster but in studies that measure it we tend to see smaller improvements like in the range of 20-30% faster. It could be that you are an outlier, of course.

2-3x faster on getting the code written. Fully completing a coding task maybe only 20-30% faster, if we count chasing down requirements, reviews, waiting for CI to pass so I can merge etc.

If it's stuff I have been doing for years and isn't terribly complex, I've found it's generally quick to skim-review. I don't need to read every line; I can glance at it, know it's a loop and why, a function call or whatever. If I see something unusual I take that as an opportunity to learn.

I've seen LLMs write some really bad code a few times lately it seems almost worse than what they were doing 6 or 8 months ago. Could be my imagination but it seems that way.


Anyone can build a car that will never fall apart. It takes a great deal of engineering to build a car that just barely doesn't fall apart.

I think it's more comforting for people to believe that there are a handful of evil, mustache-twirling villains, sitting in a smokey room, plotting and directing their henchmen to carry out a conspiracy. "There are only a few bad guys, and the rest of us are just doing what we can," they can say to feel good about the world.

It's a lot more scary to admit that there is no evil puppet master running things, and it's simply that the vast majority of people in leadership positions are just awful people, acting independently, but aligned with the rest of the awful people, intent on doing whatever it takes to make line go up and to the right.


Honestly, I wouldn't even couch it as the majority of leaders being evil: it's that the systems they lead, and that we all operate within, are poorly constructed, broken, or outright corrupt, and need a concerted effort on all our parts to fix them so that they actually work for everyone, rather than funneling wealth and power to the already-wealthy and already-powerful by default.

And that's genuinely hard! Just by the nature of things, it is much, much easier to create a system that reinforces existing power structures than a system that works to subvert them and give more power to those who have little.


Why not take the simple result from economics? The bigger the economic union, the bigger the inequality. Transforming small European countries into the EU, without even harmonizing minimum-pay laws and social rules, was always going to concentrate wealth to an enormous extent.

One might even say that given the EU's history as an organization, this has always been the intent of creating it in the first place, not an accident.


The mature perspective is that there isn't one big conspiracy. There are many small conspiracies.

Not attacking you in particular, but I've always hated how we talk about "licensing restrictions" as if they're some kind of vague law of nature, like gravity. Oh, Studio X can't do Y... Because Licensing. "Licenses" are entirely conjured up by humans, and if there was an actual desire by the people who make decisions to change something, those people would find a way to make the "licensing restrictions" disappear. Reality is, the people making these decisions don't want to change things, at least not enough to go through the effort of changing and renegotiating the licenses. It's not "licensing restrictions" that is stopping them.

Same always comes up when we talk about why doesn't Company X open source their 20 year old video game software? Someone always chimes in to say "Well they don't because of 'licensing issues' with the source code." as if they were being stopped by a law of physics.


Speaking as someone who once worked at a company where these were real issues that came up - it's very often the case that intermediate parties in the contracts have dissolved.

Renegotiating the contracts would require lengthy and expensive processes of discovering the proper parties to actually negotiate with in the first place.

Although the contracts that were already executed can be relied upon, it truly is a can of worms to open, because it's not "Renegotiate with Studio X", it's "Renegotiate with the parent company of the defunct parent company of the company who merged with Y and created a new subsidiary Z" and so on and so forth, and then you have to relicense music, and, if need be, translations.

Then repeat that for each different region you need to relicense in because the licenses can be different for different regions.

The cost of negotiation would be greater than the losses to piracy tbh.


That’s why I strongly believe there needs to be term limits on these kinds of contracts. Copyright is supposed to benefit the consumer, after all.

Copyright has never been about benefitting consumers. Or artists, for that matter.

It was invented to protect publishers (printing press operators). That continues to be who benefits from copyright. It's why Disney is behind all the massive expansion of copyright terms in the last hundred years.


Yes, thank you, not enough people know this. Though, it should be inferable from the name. “Copy right” to mean “I/we retain the right to make copies”. Certainly sounds like a publisher right to me.

I'm with you in spirit, but I think you are underestimating how wide and complex the dependency trees can be in content licensing. And simplifying those licensing structures often mean removing control from individual artists, which we tend to consider a Bad Thing.

Much like local control of zoning, that is a principle that many folks take on faith as being "good" despite all the actual outcomes.

In collaborative productions it is almost never the "individual" artist anyway: it's whatever giant conglomerate bought whatever giant conglomerate that paid everyone involved as little as the union would let them get away with.


> Reality is, the people making these decisions don't want to change things, at least not enough to go through the effort of changing and renegotiating the licenses.

Which is a perfectly sensible reason for a business decision.

> "Well they don't because of 'licensing issues' with the source code." as if they were being stopped by a law of physics.

So laws should just be ignored? Issues created by human social constructs are very real.


We can change the laws. Radio stations don't have "licensing issues" with playing songs.

From another angle, if copyright were more like it was originally in the US, every single show I watched as a kid would be in the public domain, since I haven't been a kid for 28 years.


Radio is a lot simpler. Used to work in that realm back in the Napster and Kazaa days.

You have a broadcast station. You know that an estimated 30k people are listening. You sell those numbers to advertisers. Now you play a song 1x, you record that fact. At the end of the month, you tally up the plays for each artist, multiply by your 30k audience, and cut a check to ASCAP or BMI. That's it. You just keep track of how many plays and your audience size, and send itemized checks monthly.

They were downloading pirated Britney Spears over Napster and playing it on air. And since the royalties were paid in full, it was actually legal. Not a lawyer, but they evidently checked and it was fine.

I'd like something similar for video. Grab shows from wherever, put together the biggest streaming library of EVERYTHING, and cut royalty checks for rights holders. But nope, can't do that. Companies are too greedy.
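
For contrast with the video mess, the radio-style accounting above is simple enough to fit in a few lines. A toy sketch with made-up numbers (AUDIENCE, RATE_PER_LISTEN, and the play log are all hypothetical; real ASCAP/BMI rates and reporting formats differ):

    # Toy monthly royalty tally: plays x audience x rate, itemized per artist.
    from collections import Counter

    AUDIENCE = 30_000          # estimated listeners per broadcast
    RATE_PER_LISTEN = 0.0001   # hypothetical rate, dollars per listener-play

    play_log = ["Artist A", "Artist B", "Artist A"]  # one entry per on-air play

    for artist, plays in Counter(play_log).items():
        check = plays * AUDIENCE * RATE_PER_LISTEN
        print(f"{artist}: {plays} plays x {AUDIENCE:,} listeners -> ${check:,.2f}")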


That shows how tech monopolies are bad for content creators.

Like Spotify monopolizing music streaming, and now creators have the choice of getting virtually nothing from Spotify or literally nothing by avoiding Spotify (unless you're already Taylor Swift).

With radio stations, no single radio station could really hold you over a barrel, because there were still a lot of other radio stations to work with.


Disobeying unjust laws is a moral imperative. Working around laws that hurt society is good for society. Changing laws that aren't benefiting society is the sign of a functioning government.

And I assume you are the final authority on which laws are unjust?

The issue is that Netflix doesn't control those restrictions, the content creators (well, rights holders) do, and their incentives don't always align.

Yea, what I mean by "people who make decisions" is everybody involved: studios, distributors, rights holders, and the maze of middlemen who have inserted themselves into the business: If all of them decided that more money could be made, if not for those pesky licenses, the "licensing problems" would immediately disappear.

And if any of them decide they are better served by the current arrangement, the licensing problems remain.

You seem to be making incredibly banal observations.


That's what governance is for, though. These laws can be changed to require collaboration or remove the artificial monopolies.

They haven't been because the people being hurt by it are way less organized than the people benefitting, not because things couldn't ever change.


Licensing is really complicated and requires a lot of paperwork. The best example is the music soundtracks of old TV series. They even get substituted if the proper license to stream them can't be obtained. So some old shows get a new soundtrack or background music and don't feel the same.

Noticed that with a lot of international shows Netflix gets the rights to. They so often have this awful, chipper, toony music.

The discovery+ app is still operating in some regions because of licensing, 3.5 years after all the Discovery content got integrated into HBO Max.

This is so vague and conspiratorial, I'm not sure how it's the top comment. How does this exactly work? Give a concrete example. Show the steps. How is Palantir going to make me, someone who does not use its products, a "slave of the state?" How is AI going to intimidate me, someone who does not use AI? Connect the dots rather than making very broad and vague pronouncements.

> How is Palantir going to make me, someone who does not use its products, a "slave of the state?"

This is like asking how Lockheed-Martin can possibly kill an Afghan tribesman, who isn't a customer of theirs.

Palantir's customer is the state. They use the product on you. The East German Stasi would've drooled enough to drown in over the data access we have today.


OK, so map it out. How do we go from "Palantir has some data" to "I'm a slave of the state?" Could someone draw the lines? I'm not a fan of this administration either, but come on--let's not lower ourselves to their reliance on shadowy conspiracy theories and mustache-twirling villains to explain the world.

"How does providing a surveillance tool to a nation state enable repression?" seems like a question with a fairly clear answer, historically.

The Stasi didn't employ hundreds of thousands of informants as a charitable UBI program.


I'm not asking about how the Stasi did it in Germany, I'm asking how Palantir, a private company, is going to turn me into a "slave of the state" in the USA. If it's so obvious, then it should take a very short time to outline the concrete, detailed steps (that are relevant to the USA in 2025) down the path, and how one will inevitably lead to the other.

I'll answer with a question for you: what legitimate concerns might some people have about a private company working closely with the government, including law enforcement, having access to private IRS data? For me, the answer to your question is embedded in mine.

> I'm asking how Palantir, a private company, is going to turn me into a "slave of the state" in the USA.

This question has already been answered for you.

The government uses Palantir to perform the state's surveillance. (And in a way that does an end-run around the Fourth Amendment; https://yalelawandpolicy.org/end-running-warrants-purchasing....)

As the Stasi used private citizens to do so. It's just an automated informant.

And this is hardly theoretical. https://gizmodo.com/palantir-ceo-says-making-war-crimes-cons...

> Palantir CEO and Trump ally Alex Karp is no stranger to controversial (troll-ish even) comments. His latest one just dropped: Karp believes that the U.S. boat strikes in the Caribbean (which many experts believe to be war crimes) are a moneymaking opportunity for his company.

> In August, ICE announced that Palantir would build a $30 million surveillance platform called ImmigrationOS to aid the agency’s mass deportation efforts, around the same time that an Amnesty International report claimed that Palantir’s AI was being used by the Department of Homeland Security to target non-citizens that speak out in favor of Palestinian rights (Karp is also a staunch supporter of Israel and inked an ongoing strategic partnership with the IDF.)


Step 1, step 2, step 3, step 4? And a believable line drawn between those steps?

Since nobody's actually replying with a concrete and believable list of steps from "Palantir has data" to "I am a slave of the state" I have to conclude that the steps don't exist, and that slavery is being used as a rhetorical device.


Step 1: Palantir sells their data and analysis products to the government.

Step 2: Government uses that data, and the fact that virtually everyone has at least one "something to hide", to go after people who don't support it.

This doesn't really require a conspiracy theory board full of red string to figure out. And again, this isn't theoretical harm!

> …an Amnesty International report claimed that Palantir’s AI was being used by the Department of Homeland Security to target non-citizens that speak out in favor of Palestinian rights…


Your description is missing a parallel process of how we arrive(d) at that condition of the nominal government asserting direct control.

Corporate surveillance creates a bunch of coercive soft controls throughout society (ie Retail Equation, "credit bureaus", websites rejecting secure browsers, facial recognition for admission to events, etc). There isn't enough political will for the Constitutional government to positively act to prevent this (eg a good start would be a US GDPR), so the corporate surveillance industry is allowed to continue setting up parallel governance structures right out in the open.

As the corpos increasingly capture the government, this parallel governance structure gradually becomes less escapable - ie ReCAPTCHA, ID.me, official communications published on xitter/faceboot, DOGE exfiltration, Clearview, etc. In a sense the surging neofascist movement is closer to their endgame than to the start.

If we want to push back, merely exorcising Palantir (et al) from the nominal government is not sufficient. We need to view the corporate surveillance industry as a parallel government in competition with the Constitutionally-limited nominally-individual-representing one, and actively stamp it out. Otherwise it just lays low for a bit and springs back up when it can.


This seems like a simple conclusion, to the point where I'm surprised that no one replying to you had really put it in a more direct way. "slave of the state" is pretty provocative language, but let me map out one way in which this could happen, that seems to already be unfolding.

1. The country, realizing the potential power that extra data processing (in the form of software like Palantir's) offers, starts purchasing equipment and massively ramping up government data collection. More cameras, more facial scans, more data collected at points of entry and government institutions, more records digitized and backed up, more unrelated businesses contracted to provide all sorts of data, more data about communications, transactions, interactions - more of everything. It doesn't matter what it is; if it's any sort of data about people, it's probably useful.

2. Government agencies contract Palantir and integrate their software into their existing data pipeline. Palantir far surpasses whatever rudimentary processing was done before - it allows for automated analysis of gigantic swaths of data, and can make conclusions and inferences that would be otherwise invisible to the human eye. That is their specialty.

3. Using all the new information about how all those bits and pieces of data are connected, government agencies slowly start integrating that new information into the way they work, while refining and perfecting the usable data they can deduce from it in the process. Just imagine being able to estimate nearly any individual's movement history based on many data points from different sources. Or having an ability to predict any associations between disfavored individuals and the creation of undesirable groups and organizations. Or being able to flag down new persons of interest before they've done anything interesting, just based on seemingly innocuous patterns of behavior.

4. With something like this in place, most people would likely feel pretty confined - at least the people who will be aware of it. There's no personified Stasi secret cop listening in behind every corner, but you're aware that every time you do almost anything, you leave a fingerprint on an enormous network of data, one where you should probably avoid seeming remarkable and unusual in any way that might be interesting to your government. You know you're being watched, not just by people who will forget about you two seconds after seeing your face, but by tools that will file away anything you do forever, just in case. Even if the number of people prosecuted isn't too high (which seems unlikely), the chilling effect will be massive, and this would be a big step towards metaphorical "slavery".


You mentioned you're not a fan of this administration. That's -1 on your PalsOfState(tm) score. Your employer has been notified (they know where you work, of course), and your spouse's employer too. Your child's application to Fancy University has been moved to the bottom of the pile; by the way, the university recently settled a lawsuit brought by the government for admitting too many "disruptors" with low PalsOfState scores. Palantir has provided a way for you to improve your score: click the Donateto47 button. We hope you can attend the next political rally in your home town; their cameras will be there to make sure.
