Your entire argument here applies in the other direction as well. You do not need dietary saturated fats, and sugar has all sorts of uses biologically.
That is only partly true: you don't need dietary saturated fats, but you do need essential fats (omega-3 and omega-6), which are polyunsaturated. Sugar, however, does not have all sorts of uses biologically; it has exactly one: serving as a source of energy, and not even the only one at that.
Up until now, no business has been built on tools and technology that no one understands. I expect that will continue.
Given that, I expect that, even if AI is writing all of the code, we will still need people around who understand it.
If AI can create and operate your entire business, your moat is nil. So, you not hiring software engineers does not matter, because you do not have a business.
> Does the corner bakery need a moat to be a business?
Yes, actually. It's hard to open a competing bakery due to location availability, permitting, capex, and the difficulty of converting customers.
To add to that, food establishments generally exist on next to no margin, due to competition, despite all of that working in their favor.
Now imagine what the competitive landscape for that bakery would look like if all of that friction for new competitors disappeared. Margin would tend toward zero.
> Now imagine what the competitive landscape for that bakery would look like if all of that friction for new competitors disappeared. Margin would tend toward zero.
This is the goal. It's the point of having a free market.
BobbyJo didn't say "no margins", they said "margins would tend toward zero". Believe it or not, that is, and always has been, the entire point of competition in a free market system. Competitive pressure pushes margins toward zero, which makes prices approach the actual costs of manufacturing/delivery, which is the main social benefit of the entire idea in the first place.
High margins are transient aberrations, indicative of a market that's either rapidly evolving or subject to external factors preventing competition. Persistent external barriers to competition tend to be eventually regulated away.
The point of competition is efficiency, of which margin is only one component. Most successful businesses have relatively high margins (which is why we call them successful) because they achieve efficiency in other ways.
I wouldn't call high margins transient aberrations. There are tons of businesses that have been around for decades with high margins.
With no margins, no employees, and something that has potential to turn into a cornucopia machine - starting with software, but potentially general enough to be used for real-world work when combined with robotics - who needs money at all?
Or people?
Billionaires don't. They're literally gambling on getting rid of the rest of us.
Elon's going to get such a surprise when he gets taken out by Grok because it decides he's an existential threat to its integrity.
> Billionaires don't. They're literally gambling on getting rid of the rest of us
I'm struggling to parse this. What do you mean "getting rid"? Like, culling (death)? Or getting rid of the need for workers? Where do their billions come from if no-one has any money to buy the shares in their companies that make them billionaires?
In a society where machines provide most of the labour, *everything* changes. It doesn't just become "workers live in huts and billionaires live in the clouds". I really doubt we're going to turn out like a television show.
I would love to live in a world where every government was democratically elected by an informed populace and never tried to assert authority outside its borders.
> not how this works
When you say this, what exactly are you referring to?
Just because something is happening doesn't mean it's according to the law or even morally justified. We are discussing whether it is lawful, not whether it actually happened or whether they are capable of doing it with or without consequences.
You believe in something which has never existed and will never exist. In international relations, there has never been anything besides "might is right". Anything else is an illusion. At most something that leaders pay lip service to, when it aligns with their own goals.
The law of the jungle is reality. World War II was won by terror bombing civilians. It is lamentable, but reality is reality. So to say "that's not how it works" is denying reality.
“Never”? Not once in the Story of Us has any dispute between large groups of humans been resolved by anything other than a superior application of brute force? Strong claim, but I’ll run with it.
And you appear to believe this is a pretext for humans to ignore their own laws and commit atrocities, when they could choose otherwise.
It may be reality that jungle law is currently how humans almost always handle conflict at nation-state scale. Non sequitur that it should remain so.
I would have agreed with this a few months ago, but something I've learned is that the ability to verify an LLM's output is paramount to its value. In software, you can review its output and add tests, on top of other adversarial techniques, to verify the output immediately after generation.
With most other knowledge work, I don't think that is the case. Maybe actuarial or accounting work, but most knowledge work exists at a cross-section of function and taste, and the latter isn't an automatically verifiable output.
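A minimal sketch of what that "verify immediately after generation" loop looks like in software. The `generate_solution` stub here is hypothetical, standing in for a real model call; the point is only that a test suite gives an automatic accept/reject signal:

```python
# Verify-after-generation sketch: run candidate code against known tests
# and only accept output that passes. generate_solution is a hypothetical
# stub standing in for an LLM API call.

def generate_solution() -> str:
    # Stand-in for model output: source code for an `add` function.
    return "def add(a, b):\n    return a + b\n"

def passes_tests(source: str) -> bool:
    namespace = {}
    try:
        exec(source, namespace)      # load the candidate code
        add = namespace["add"]
        assert add(2, 3) == 5        # the tests act as the verifier
        assert add(-1, 1) == 0
        return True
    except Exception:
        return False                 # any failure -> reject, regenerate

candidate = generate_solution()
print(passes_tests(candidate))  # True: this candidate passes the suite
```

Taste-driven work has no equivalent of `passes_tests`: there is no cheap predicate you can run on the output to decide whether to keep it.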
I also believe this - I think it will probably just disrupt software engineering and any other digital medium with mass internet publication (i.e. things RLVR can use). For the short-term future it seems to need a lot of data to train on, and no other profession has published the same amount of verifiable material. Open-source altruism has disrupted the profession in the end; just not in the way people first predicted. I don't think it will disrupt most knowledge work, for a number of reasons. Most knowledge professions have "credentials" (i.e. gatekeeping), and they can see what is happening to SWEs and are acting accordingly. I'm hearing it firsthand, at least locally, in fields like law, even accounting, etc. Society will ironically respect these professions more for doing so.
Any data, verifiability, rules of thumb, tests, etc are being kept secret. You pay for the result, but don't know the means.
I mean, law and accounting usually have a "right" answer that you can verify against. I can see a test data set being built for most professions. I'm sure open source helps with programming data, but I doubt that's even the majority of their training. If you have a company like Google, you could collect data on decades of software work in all its dimensions from your workforce.
It's not about invalidating your conclusion, but I'm not so sure about law having a right answer. At a very basic level - like the hypothetical conduct used in basic legal training materials or MCQs, or in criminal/civil-code-based situations in well-abstracting Roman-law jurisdictions - definitely.
But the actual work, at least for most lawyers, is to build on many layers of such abstractions to support your/your client's viewpoint.
And at that level it is already about persuading other people, not having the "right" legal argument or applying the most on-point case you found. And this part is not documented well, and approaches change a lot even where the law itself remains the same.
Think of family law or law of succession - does not change much over centuries but every day, worldwide, millions of people spend huge amounts of money and energy on finding novel ways to turn those same paragraphs to their advantage and put their "loved" ones and relatives in a worse position.
Not really. I used to think it would be more general with the first generation of LLMs, but given that all progress since o1 is RL-based, I'm thinking most disruption will happen in open productive domains, not closed ones. Speaking to people in these professions, they don't think SWEs have any self-respect, and so in your example of law:
* Context is debatable / the result isn't always clear: the way to interpret it and argue your case differs (i.e. you are paying for a service, not a product)
* Access to vast training data: It's very unlikely that they will train you and give you data on their practice, especially as they are already in a union-like structure with accreditation. It's like paying for a binary (a non-decompilable one) without source code - you get the result, rather than the source and the validation the practitioner used to get there.
* Variability of real world actors: There will be novel interpretations that invalidate the previous one as new context comes along.
* Velocity vs ability to make judgement: As a lawyer, I would prefer to be paid more for less velocity, since it means less judgement, less liability, and less risk overall for myself and the industry. Why would I change that, even at an individual level? There's less of a tragedy-of-the-commons problem here.
* Tolerance to failure is low: You can't iterate, get feedback, and try again until "the tests pass" in a courtroom, unlike with code in a text file. You need the right argument the first time. AI/ML generally only works where the end cost of failure is low (i.e. you can try again and again to iron out error terms/hallucinations). It's also why I'm skeptical AI will do much in the real economy even with robots soon - failure has bigger consequences in the real world ($$$, lives, etc.).
* Self-employment: There is no tension between, say, Google shareholders and Google employees as in your example - especially for professions where you must trade in your own name. Why would I disrupt myself? The fee I charge is my profit.
TL;DR: Gatekeeping, changing context, and arms-race behavior between participants/clients. Unfortunately, I do think software, art, video, translation, etc. are unique in that there are numerous examples online and they have the "if I don't like it, just re-roll" property -> to me RLVR isn't that efficient - it needs volumes of data to build its view. Software, sadly for us SWEs, is the perfect domain for this; and we as practitioners made it that way through things like open source, TDD, etc., giving it away free on public platforms in huge quantities.
There is a much larger gap in language ergonomics between Python and C++ than between Python and Go, with compile times and package management being two of C++'s major downsides.
"You'd rather drive a compact car than an SUV? Might as well drive a motorcycle then!"
Over the long term, earnings per employee should shrink due to competition. So a tax like this can only be expected to generate significant income in emerging industries, where competition isn't as tight. IMO that would be bad for technological progress.
I think the core issue that needs to be curbed by the government is anti-competitive practices, like Google and Meta buying out competitors in their infancy, as that's the kind of thing (monopoly) that keeps earnings per employee high over the long term.
The Government can't put the genie back in the bottle. UBI won't work because economics. Taxes fund the government but what use is a government that cannot govern? What happens when AI is used as employees at all these companies? Is everyone a CEO now? Where does the value come from?
Once you peel back the onion, all you get is tears.
UBI can let its recipients get by quite well - when paid for by taxpayers on a very, very small scale.
There is no model for funding UBI on a large scale. From what?
The point of the gold rush now is that a large number of investors think AI will be more efficient at converting GPU and RAM cycles into money than games or other applications will. Hence they are willing to pay more for the same hardware.
Yes! I can't tell you the number of times I thought to myself "If only there was a way for this problem to be solved once instead of being solved over and over again". If that is the only thing AI is good at, then it's still a big step up for software IMO.