Hacker News | ible's comments

People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things for humans to do to provide value for each other.

The question is how our individuals and, more importantly, our various social and economic systems handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift with it.

If the benefits of AI accrue to, or are captured by, a very small number of people, and the costs are widely dispersed, things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.


I'm optimistic.

Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive. In the '50s and '60s we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), and banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.

There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually moves to "this meeting never happened at all because the person making the decision just told the LLM and it did it".

You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with the bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better.


"I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better."

I'm not sure most of those organizations will have many customers left, if every white collar admin job has been automated away, and all those people are sitting unemployed with whatever little income their country's social safety net provides.

Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living.


> Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living.

Yes, that's what happens. All those people find other jobs, do other work, and that new work is usually much less boring than the old work, because boring work is easier to automate.

Historically, economies have changed and grown because of automation, but not collapsed.


AI agents might be able to automate 80% of certain jobs in a few years but that would make the remaining 20% far more valuable. The challenge is to help people rapidly retrain for new roles.

Humans will continue to have certain desires far outstripping the supply we have for a long time to come.

We still don’t have cures for all diseases, personal robot chefs & maids, and an ideal house for everyone, for example. Not all have the time to socialize as much as they wish with their family and friends.

There will continue to be work for humans as long as humans provide value & deep connections beyond what automation can. The jobs could themselves become more desirable with machines automating the boring and dangerous parts, leaving humans to form deeper connections and be creatively human.

The transition period can be painful. There should be sufficient preparation and support to minimize the suffering.

Workers will need to have access to affordable and effective methods to retrain for new roles that will emerge.

“Soft” skills such as empathetic communication and tact could surge in value.


> The jobs could themselves become more desirable with machines automating the boring and dangerous parts

Or, as Cory Doctorow argues, the machines could become tools to extract "efficiency" by helping the employer make their workers lives miserable. An example of this is Amazon and the way it treats its drivers and warehouse workers.


That depends on the social contract we collectively decide (in a democracy at least). Many possibilities will emerge and people need to be aware and adapt much faster than most times in history.


An ATM is a reliable machine with a bounded risk - the money inside - while an AI agent could steer your company into bankruptcy and bear no liability for it. AI has no skin in the game and, depending on the application, a much higher upper bound for damage: a digit read wrong in a medical transcript, and a patient dies.

> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.

Managing risk can't be automated. Every project and task needs a responsibility sink.


You can bound risk on AI agents just like an ATM. You just can't rely on the AI itself to enforce those limits, of course; you need to place the limits outside the AI's reach. But this is already documented best practice.
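The "limits outside the AI's reach" pattern can be sketched in a few lines. This is a hypothetical illustration (the class name, caps, and amounts are all made up, not any particular product's API): the agent proposes actions, but hard caps live in deterministic code the agent cannot modify.

```python
# Hypothetical sketch: bound an agent's blast radius the way an ATM's
# vault bounds a software bug -- with limits enforced outside the AI.

MAX_TRANSFER = 1_000   # illustrative per-transaction cap
DAILY_BUDGET = 5_000   # illustrative daily cap

class LimitExceeded(Exception):
    pass

class GuardedExecutor:
    """Executes agent-proposed transfers. The limits live here, in
    plain deterministic code the agent can call but cannot rewrite."""

    def __init__(self):
        self.spent_today = 0

    def execute(self, amount: int) -> int:
        if amount > MAX_TRANSFER:
            raise LimitExceeded("per-transaction cap")
        if self.spent_today + amount > DAILY_BUDGET:
            raise LimitExceeded("daily budget")
        self.spent_today += amount
        return amount  # in a real system: call the payment API here
```

The design point is simply that no prompt or model output can raise the caps; rejection happens before any side effect, regardless of how confidently the agent argued for the action.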

The point about AI not having "skin" (I assume "skin in the game") is well taken. I often say that "if you've assigned an AI agent the 'A' in a RACI matrix, you're doing it wrong". A very important lesson that some company will learn publicly soon enough.


> Every project and task needs a responsibility sink.

I don't disagree, though I'd put it more as "machines cannot take responsibility for decisions, so machines must not have authority to make decisions".

But we've all been in meetings where there are too many people in the room, and only one person's opinion really counts. Replacing those other people with an LLM capable of acting on the decision would be a net positive for everyone involved.


> Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive.
>
> And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.

I don't mean to pick on your example too much. However, when I worked in financial audit, reviewing journal entries spit out from SAP was mind numbingly boring. I loved doing double-entry bookkeeping in my college courses. Modern public accounting is much, much more boring and worse work than it was before. Balancing entries is enjoyable to me. Interacting with the terrible software tools is horrific.

I guess people who would have done accounting are doing other, hopefully more interesting, jobs, in the sense that the absolute number of US accountants is in steep decline due to the low pay and the highly boring work. I myself am certainly one of them as a software engineer career switcher. But the actual work of a modern accountant has not become more interesting. It's also become the email + meetings + spreadsheet that you mentioned, because there wasn't much else for it to evolve into.


I did qualify it with "most people" because of people like you who enjoy that kind of work :).

I would hate that work, but luckily we have all sorts of different people in the world who enjoy different things. I hope you find something that you really enjoy doing.


That's fair enough! Most accountants I crossed paths with dislike the modern type of busy work. Lots of marking up spreadsheets and PDFs rather than crafting journal entries. Luckily for me, I switched into software engineering, and system design scratches that same exact itch. Now I build software for financial auditors to make their work less dreadful, and it's very popular with them because they don't have to do as much of the terrible, modern tasks that have befallen them with advances in accounting tech.


> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.

It's interesting how it's never your job that will be automated away in this fantasy; it's always someone else's.


I have absolutely had that job, and it sucked. I also worked as a farm hand, a warehouse picker, a construction site labourer, and a checkout clerk. Most of that work is either already automated or about to be, thankfully.


"benefits" = shareholder profits ++


Workshopping this tortured metaphor:

AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.

The owners of the tech need to reinvest in the hosts.


Realistically, at a certain point the training would likely involve interaction with reality (by sensors and actuators), rather than relying on secondhand knowledge available in textual form.


Yeah, I feel like the real aha moment is still coming, once there is a GPT-like thing that has been trained on reality, not its shadow.


Yes and reality is the hard part. Moravec’s Paradox [1] continues to ring true. A billion years of evolution went into our training to be able to cope with the complexity of reality. Our language is a blink of an eye compared to that.

[1] https://en.wikipedia.org/wiki/Moravec's_paradox


Reality cannot be perceived. A crisp shadow is all you can hope for.

The problem for me is the point of the economy in the limit where robots are better, faster and cheaper than any human at any job. If the robots don’t decide we’re worth keeping around we might end up worse than horses.


but that crisp shadow is exactly what we call perception


Look I think that is the whole difficulty. In reality, doing the wrong thing results in pain, and the right thing in relief/pleasure. A living thing will learn from that.

But machines can experience neither pain nor pleasure.


There's only so much you can learn from humans. AI didn't get superhuman at go (the game) by financing more good new human go players. It just played against itself, even discarding human source knowledge, and achieved those levels.


> What happens when there are no more hosts to donate more training-blood?

LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through all conceivable tasks and provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like an experiment following a hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in the real world.


People are animals.


When horses develop technology and create all sorts of jobs for themselves, this will be a good metaphor.


The average person doesn't develop technology or create jobs for themselves.



Sounds like something a goat lover would say...


I'd be more worried about the implicit power imbalance. It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs.


Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.

But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.

I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.

Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.


Charlie Chaplin's speech is more relevant now than ever before:

https://www.youtube.com/watch?v=J7GY1Xg6X20


I first saw this about 15 years ago and it had a profound impact on me. It's stuck with me ever since

"Don't give yourselves to these unnatural men, machine men, with machine minds and machine hearts. You are not machines, you are not cattle, you are men. You have the love of humanity in your hearts."

Spoken 85 years ago and even more relevant today


The thing that the ultra-wealthy desire above all else is power and privilege, and they won't be getting either of that in those bunkers.

They sure as shit won't be content to leave the rest of us alone.


Yeah I know it's an unrealistic ideal but it's fun to think about.

That said, my theory about power and privilege is that it's actually just a symptom of a deep fear of death. The reason gaining more money/power/status never lets up is because there's no amount of money/power/status that can satiate that fear, but somehow naively there's a belief that it can. I wouldn't be surprised if most people who have any amount of wealth have a terrible fear of losing it all, and to somebody whose identity is tied to that wealth, that's as good as death.


Going off your earlier comment, what if instead of a revolution, the oligarchs just get hooked up to a simulation where they can pretend to rule over the rest of humanity forever? Or what if this already happened and we're just the peasants in the simulation


This would make a good Black Mirror episode. The character lives in a totally dystopian world making f'd up moral choices. Their choices make the world worse. It seems nightmarish to us, the viewers. Then towards the end they pull back: they unplug and are living in a utopia. They grab a snack, are greeted by people who love and care about them, then they plug back in and go back to being their dystopian tech-bro ideal self in their dream/ideal world.


I like this future, the Meta-verse has found its target market


> It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs.

You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you be independent.


I was trying to phrase something like this, but you said it a lot better than I ever could.

I can’t help but smile at the possibility that you could be a bot.


It may not be obvious if you are in the US but the reaction of Canadians to Trump’s 51st state garbage is extremely strong in Canada.

The tariffs are one thing, and pissed people off, but the rhetoric is what has really done the damage.

It’s viewed as a complete betrayal, and as a real and serious threat to Canadian sovereignty.

I work for an American company from Canada, and have changed my financial planning because I’m not sure if I’ll be able to keep doing that.

When I see a 70% drop I’m surprised it isn’t more.


I spent ~18 years in Canada, and my professional network is strongest there. I've lost 60% of my pipeline of leads since December.


That product name sucks for Veo, the AI sports video camera company that literally makes a product called the Veo 2 (https://www.veo.co).


I tried it and it was laughably bad. The auto park was worse than it was years ago, leaving me a foot out from the curb in an easy spot. Trying to use the self driving on city streets resulted in it stuttering and stopping immediately. The only thing that worked decently was highway driving, which it does without the self driving package anyway.

Given its performance I wouldn't dare trust it with any of my daily driving even if it was free.


Even on a law and implementation effort like this with such large impact there are maybe 10,000 people in the world involved who actually understand it in any significant sense, and they all have a strong reason not to post about it on a public forum, or even a private one.

And even those people will only understand a limited aspect or perspective.

So the people commenting can only comment based on their outside impressions and emotions about generalities, and on how the specific implementation details seem to affect them as end users.

I try to take anything said with that in mind. There is information in the comments about user experience but anything else is at the level of bullshitting at the bar with your friends, and not to be taken personally.


> I found it quite frustrating how teams would be legitimately actively pursuing ideas that would be good for the world, without prioritizing short-term Google interests, only to be met with cynicism in the court of public opinion.

This is part and parcel of working for a visible/impactful organization. People will constantly write things, good and bad about the organization. Most of them, good and bad, will be wrong. They'll be based on falsehoods, misinterpretations, over-simplifications, political perspectives, etc.

This becomes a problem when people in the company assume that because most of the feedback is nonsense, all of it is nonsense. That is especially tempting when the feedback is hurtful to you or critical of your team or values.

I found a bit of Neil Gaiman's MasterClass very helpful when reading such feedback. Very roughly Gaiman said that when someone is telling you something doesn't work for them, and what you should do to fix it, you should believe them that it doesn't work for them, but that the author is much better placed than the reader to know how and if to fix it.

In my context I try to understand why someone is saying something, what information I can take from it, and whether there is anything within my expertise, control, or influence that can or should be done about it.

(If you take anything from this comment, I think it should be to go listen to Neil Gaiman talk about anything!)


This is actually a great argument for a strong social democracy/welfare state.


Sort of.

If everyone does what they want - who does the jobs no one wants?


Automation and/or higher pay tends to make that not a problem.


Ah yes, the Keynes Ex Machina


Automation is just going to take jobs, and the higher-level jobs will be unobtainable by the working class, who won't have the income or childhood stability to obtain the education necessary for the jobs not taken by automation.

People talk about UBI (which I used to be for, but upon deeper investigation I feel is problematic) and about living wages for the lowest of jobs, but the USA is wholly captured by capital. We have two right-wing parties that are controlled by industry. Neither of these things will ever come unless it's to hold off a violent revolution.


What an excellent demonstration of what values/norms a group really cares about.

Biggar violated the all-time favourite in-group rule: don't talk out of school. Don't talk about fight club. Don't snitch.

The other founders violated a norm against pushing yourself ahead and taking advantage of others that doesn't even seem to hold in many groups, especially upper class/wealthy ones.


Here's an analogy. You can get fired for always being late to work, but not get fired for bad behaviour (even crime) outside of work... say drunk driving.

This doesn't mean that being late to work is worse than drunk driving, or that person A is worse than person B. Not everything is a general judgement on worth or character.


I don't think that analogy applies. Both parties in this story did things in a 'work' context.

If I go on a work forum and describe my bad behaviour, behaviour that is harmful to others, and advocate for others to do it, I'm going to get in trouble, and possibly fired.

If I publicly discuss private work information, I'll definitely get fired.

If I mention the bad behaviour of someone at work publicly, without naming names, I might get a talking to, but probably won't be fired.

How a group reacts to those different things over time defines the norms and culture of the group.


Fair point.

I still think the conclusion applies though. Not everything is a judgment on overall worth or character. Most rules exist for banal reasons. Arrive on time, so we can open on time. Maintain confidentiality, so that we can have a non-public forum. If heated arguments are settled by going to Twitter, that's a cultural norm that negates private forums. It's not a moral norm, necessarily, but an operational one.

Very few things are absolute though. If someone brags about murder, and that confidentiality is maintained then it certainly does say something about norms and culture of a group. That said, naughtiness is an explicit part of YC culture, for better or worse.

In any case, sometimes there are choices. Civil disobedience can lead to consequences, to make another analogy. People participating in it accept that.

IDK what actually happened on the private forum, but I imagine this is an argument that spilled out from private to public. If Paul considered this a "the world must know" situation, then maybe he considers the price worth paying.


I have yet to see an actual source for this "bad behaviour, behaviour that is harmful to others, and advocate for others to do it" part.


Though based on dasickis' comment, it seems there may not have been bad behaviour in the first place.


That is not a good analogy because most people's employment is "at will", meaning that they can be fired at any time for any reason other than discrimination against a protected class or retaliation for some protected activity (e.g., being a government whistleblower). If an employer wants to fire an employee who got arrested for drunk driving outside of work, that's usually not a problem.


>say drunk driving.

Unless of course, your job is driving.


Other issues aside, "A is just a complicated B, so apply the same rules to A as B" is not a formula for good decision making.

It's a common way to make bad mistakes though.


The OP made a specific and accurate statement about danger to life and limb. A generic soundbite does not make a convincing counterpoint.


That's a terrible and misleading paraphrase of the above post, whose main point was contrasting the extreme risks inherent to auto repair with the mundane risks of computer repair. Which makes some specific arguments brought up against right to repair, like battery replacement being too dangerous for independents/end users, seem unconvincing. Particularly because vehicles themselves have dangerous batteries.

Here's an allegation of that argument being brought up by Apple.

https://www.forbes.com/sites/ewanspence/2019/05/01/apple-iph...


However IMO this is a case of "A is much more complex and dangerous than B, so there is no reason to have B driven by more secretive and restrictive standards than those applied to A"


A Rectangle is just a complicated Square, so let’s just subclass. That should work out okay.

https://en.m.wikipedia.org/wiki/Circle–ellipse_problem
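For anyone who hasn't run into the square/rectangle trap before, it's easy to demonstrate. A minimal Python sketch (the names are purely illustrative):

```python
# Illustrative sketch of why "A is just a complicated B, so subclass"
# can go wrong: Square breaks an invariant callers expect of Rectangle.

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        # A square must keep its sides equal, so this silently
        # changes the height too -- surprising any Rectangle caller.
        self.w = self.h = w

def stretch(rect):
    # The caller reasonably assumes only the width changes.
    rect.set_width(10)
    return rect.area()

# stretch(Rectangle(2, 3)) == 30, but stretch(Square(3)) == 100
```

The caller's assumption about Rectangle ("setting width leaves height alone") silently fails for Square, which is exactly the kind of mistake the parent comment is warning about.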


Trying to design for many unknown futures is expensive.

Changing in the future costs something.

Designing for future changes up front makes sense if cost(future change) > cost(future proofing)

SaaS? Do virtually no future proofing.

IoT? Do some.

Space probe? Do lots.

Also, if you haven't built quite a few relatively similar systems, don't do future proofing without talking to people who have.

