vjk800's comments | Hacker News

> The perception of Spain is much more positive in the Anglophone world - it's viewed as a country where cost of living is low, you can nap in the middle of the day, the women/men are hot and easy, the wine is great and cheap, and you can party late at night.

If you're a tourist, you only get to experience those parts. If you live there, you have to experience the other 99% of life as well, and that's not so great.


Did you even read the second sentence?

Tractors largely replaced human labour in farming about a hundred years ago. Should we have started taxing tractors?

I really have difficulty seeing AI as anything other than yet another type of machinery. If your argument is "but it's replacing ALMOST ALL human labour" - well, the same argument was valid for tractors a hundred years ago (when almost everyone was employed in agriculture).


This argument hinges rather strongly on whether or not AI is going to create a broad, durable, and lasting unemployment effect.

Tractors did not cause this phenomenon because the Jevons paradox kicked in and induced demand rendered the problem moot, or demand eventually exceeded what mere tractors were capable of doing for agricultural productivity.

The same can probably be said for contemporary AI, but it's tough to tell right now. There are some scant indications we've scaled LLMs as far as they can go without another fundamental discovery similar to the attention paper in 2017. GPT-5 was underwhelming, and each new Claude Opus is an incremental improvement at best, still unable to execute an entire business idea from a single prompt. If we don't continue to see large leaps in capability like circa 2021-2022, then it can be argued the Jevons paradox will kick in here and at best LLMs will be a productivity multiplier for already experienced white collar workers - not a replacement for them.

All this being said, technological unemployment is not something that will be sudden or obvious, nor will human innovation always stay under the Jevons paradox, and I think policymakers need to seriously entertain taboo solutions for it sooner or later, such as a WPA-style infrastructure project or basic income.


> technological unemployment is not something that will be sudden or obvious

I already have friends experiencing technological unemployment. Programmers suddenly need backup plans. Several designers I know are changing careers. Not to mention, the voiceover artist profession will probably cease to exist beyond this last batch of known voices. Writer, editor - these were dependable careers for friends, once. A friend travelled the world and did freelance copyediting for large clients.

ChatGPT was released just three years ago.


People keep trying to tie these two things together, forgetting the fact that ZIRP also ended 3 years ago, and that, combined with the end of the COVID-era employer credits, is when the layoffs really began. I won't say LLMs are having no impact at all on employment, but not to the degree where the job pool has dried up. Companies were encouraged to over-hire for years, and now that the free money is gone, they're acting logically. I believe if ZIRP came back we'd see workforces expand again and AI would just be seen as another useful tool.


The mishandling of how they rewrote Section 174 of the tax code also caused a lot of layoffs of developers.


Only in the US, but ZIRP and the redundancies have been worldwide.


ZIRP, IRS Section 174, and irrationally exuberant over-hiring caused the first few rounds of layoffs.

The layoffs you see now are due to offshoring disguised as AI taking over. Google, Amazon, and even Hollywood are getting in on the offshoring craze.


> Programmers suddenly need backup plans.

Yup, Claude Opus 4.5 + Claude Code feels like it's teetering right on the edge of the Jevons paradox. It can't work alone, and it needs human design and code review, if only to ensure it understands the problem and produces maintainable code. But it can build very credible drafts of entire features based on a couple of hours of planning, and then I can spend a day reading closely and tweaking for quality. And the code? It's professional work, and I've worked with contractors who did a lot worse.

So right now? Opus 4.5 feels like an enormous productivity booster for existing developers (which may indirectly create unemployment or increase the demand for software enough to create jobs), but it can't work on large projects on an ongoing basis without a knowledgeable human. So it's more like a tractor than anything else: It might cause programmer unemployment, but eh, life happens.

But I can increasingly see that it would only take about one more breakthrough, and next-gen AI models might make enormous categories of human intellectual labor about as obsolete as the buggy whip. If you could get a Stanford grad for a couple of dollars an hour, what would the humans actually do? (Manual labor will be replaced more slowly. Rodney Brooks of the MIT AI Lab had a long article recently on the state of robotics, and it sounds like robots are still heavily handicapped by inadequate hardware: https://rodneybrooks.com/why-todays-humanoids-wont-learn-dex... )

The Jevons paradox and comparative advantage won't protect you forever if you effectively create a "competitor species" with better price-performance across the board. That's what happened to the chimps and Homo neanderthalensis. And they didn't exactly see a lot of economic benefits from the rise of Homo sapiens, you know?


In my experience the code quickly becomes less than professional once the human stops monitoring what's going on.

"Inadequate hardware" is a truly ridiculous myth. The universal robot problem was, and is, and always will be an AI problem.

Just take one long look at the kind of utter garbage the human mind has to work with. It's a frame that, without a hideous amount of wetware doing data processing, can't even keep its own limbs tracked - because proprioception is made of wet meat noise and integration error. Smartphones in 2010 shipped with better IMUs, and today's smartphones ship with better cameras.

Modern robot frames just have a different set of tradeoffs from the human body. They're well into "good enough" overall. But we are yet to make a general purpose AI that would be able to do "universal robot" things. We can't even do it in a sim with perfect sensors and actuators.


Read Brooks' argument in detail, if you haven't. He has spent decades getting robots to play nicely in human environments, and he gets invited to an enormous number of modern robotics demonstrations.

His hardware argument is primarily sensory. Specifically, current generation robots, no matter how clever they might be, have a physical sensorium that's incredibly impoverished, about on par with a human with severe frostbite. Even if you try to use humans as teleoperators, it's incredibly awkward and frustrating, and they have to massively over-rely on vision. And fine-detail manual dexterity is hopeless. When you can see someone teleoperate a robot and knit a patterned hat, or even detach two stuck Lego bricks, then robots will have the sensors needed for human-level dexterity.


I did read it, and I found it so lacking that it baffles me to see people actually believe it to be a well-crafted argument.

Again: we can't even make a universal robot work in a sim with perfect sensor streams! If the issue were "universal robots work fine in sims, suffer in the real world", then his argument would have had a leg to stand on. As is? It's a "robot AI caught lacking" problem - and ignoring the elephant in the room in favor of nitpicking at hardware isn't doing anyone a favor.

It's not like we don't know how to make sensors. Wrist-mounted cameras cover a multitude of sins, if your AI knows how to leverage them - they give you a data stream about as rich as anything a human gets from the skin - and every single motor in a robot is a force feedback sensor, giving it a rudimentary sense of touch.

Nothing stops you from getting more of that with dedicated piezos, if you want better "touchy-feely" capabilities. But do you want to? We are nowhere near being limited by "robot skin isn't good enough". We are at "if we made a perfect replica of a human hand for a robot to work with, it wouldn't allow us to do anything we can't already do". The bottleneck lies elsewhere.


The refrigerator put paid to the shipping-ice-from-the-Arctic-Circle industry quickly as well. The main shock is for the people who write stuff we read, as they never expected to be in a profession that could be automated away. Lots and lots of stuff has been automated away, but we never heard their voices.


I think it's too early for AI to have impacted software work at a systemic level. There are various reasons the market is crap right now, like how you're (perhaps unknowingly) competing with cheap foreign labor in your own metro centers for tech work.

AI is just the other pincer that will finish the kill shot.


> This argument hinges rather strongly on whether or not AI is going to create a broad, durable, and lasting unemployment effect.

I think GP's argument makes a pretty strong case that it won't, even if AI somehow successfully automates 99% of all currently existing tasks. We automated away 99% of jobs once during the agricultural revolution and it didn't result in "a broad, durable, and lasting unemployment effect" then. Quite the opposite in fact.

Maybe if AI actually automates 100% of everything then we'll need to think about this more. But that seems unlikely to happen anytime in the foreseeable future given the current trajectory of the technology. (Even 50% seems unlikely.)


> The same can probably be said for contemporary AI, but it's tough to tell right now

The same can't even be said for contemporary AI, because lots of the jobs it's going to replace are theoretical or hype. Self-driving cars should've been here years ago, but because AI is extremely hard to improve upon once it gets to a certain level of efficacy, they haven't happened.

The question is: should we be discussing this stuff when AI hasn't started taking all those jobs yet?


I think it's fine to discuss solutions to hypothetical future problems as long as it's clear that these are hypothetical future problems you're talking about, not present reality.

In many of these discussions that line seems to get blurred and I start to get the impression people are using the specter of a vague, poorly understood hypothetical future problem to argue for concrete societal changes now.


>The same can probably be said for contemporary AI, but it's tough to tell right now. There are some scant indications we've scaled LLMs as far as they can go without another fundamental discovery similar to the attention paper in 2017. GPT-5 was underwhelming, and each new Claude Opus is an incremental improvement at best, still unable to execute an entire business idea from a single prompt. If we don't continue to see large leaps in capability like circa 2021-2022, then it can be argued the Jevons paradox will kick in here and at best LLMs will be a productivity multiplier for already experienced white collar workers - not a replacement for them.

The NBA has an incredibly high demand for 14-foot-tall basketball players, but none have shown up to apply. Similarly, if this causes our economy to increase demand for people to "execute an entire business idea from a single prompt", it does not mean unemployment can be alleviated by moving all the jobless into roles like that.

We don't need science fiction AI that will put everyone out of work for it to be ruinous. We only need half-assed AI good enough that they don't want to pay a burgerflipper to flip burgers anymore, and it'll all go to hell.


When most of the human population were farmers, should we have taxed the advances in agriculture that destroyed everybody's jobs?


Yes


> each new Claude Opus is an incremental improvement at best, still unable to execute an entire business idea from a single prompt.

If your way of evaluating the progress of AI is a binary one, then you'll see no progress at all until suddenly it passes that bar.

But seeing that we do have incremental improvements on essentially all evals (and in my own experience), even if it takes another decade, we should be planning for it now. Even if it does require an entirely fundamental breakthrough like the attention paper, given the number of researchers working on it and the capital devoted to it, I wouldn't put any money against such a breakthrough arriving before long.


Basic income doesn’t do anything. We already have food stamps and so on. The largest sector of US federal spending is health and social welfare. We’d have to end pretty much all those programs to run a minuscule basic income.


> We’d have to end pretty much all those programs to run a minuscule basic income

Isn't ending all those programs one of the core ideas of universal basic income? Instead of having a huge bureaucracy administering targeted social welfare, you cut all the overhead and just pay everyone enough to exist, regardless of whether they actually need it. It'd still be more expensive, but giving people something dependable to fall back on would hopefully increase innovation and entrepreneurship, offsetting some of the costs.


Okay, so let's divide the US federal budget by the number of people: about $21k per person. Now what happens to the guy who needs dialysis? It costs $60k. Right now the federal government pays. Now it's given him a third of the cost back. He just dies?
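
(For scale, those are roughly the right numbers - assuming about $7 trillion in annual federal outlays and roughly 335 million residents, which are my figures, not GP's: $7,000,000,000,000 / 335,000,000 ≈ $21,000 per person.)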


He gets it through public healthcare, like in all developed nations.

That’s a matter of where you get your taxes from. Plenty of corporations can afford to pay a more fair share. And studies on basic income have so far shown it to be effective.


> Plenty of corporations can afford to pay a more fair share

Can we stop pretending with the word "fair"? If you want to squeeze out more money then you do it by force. It's not "fair". It's just "we can do this".


If everything's automated then you don't need taxes to pay people.


Let me know when we live in The Culture, but I’ve got a feeling fully automated luxury gay space communism is a long ways off


Then what's the problem? AI is a problem (apparently) if everything is automated. Otherwise people have jobs and carry on as before.


Imagine a society that is halfway to that. So, say, there are only enough jobs for half of the people, but the rest still want to eat.


Studies on basic income have shown that it's harmful to the people who receive it.

They report no improvements on any measured outcome. Not lower stress, not more education, not better health. They work a bit less but that doesn't help them or their kids.

Over the long term it harms them because their productive skills, values, and emotional capacities atrophy away from lack of use.


> Studies on basic income have shown that it's harmful to the people who receive it.

That's extremely interesting, can you link such studies?


This podcast covers a bunch of it: https://www.youtube.com/watch?v=S5nj3DLvT64

It's one of those things that can be tricky to research because almost all the researchers and journalists on the topic very much don't want to see this conclusion. So there's a tremendous amount of misrepresentation and wishful reasoning about how to interpret the data. The truth comes out from actually reading the data, not researcher or journalist summaries.


"Final verdict on Finland's basic income trial: More happiness but little employment effect"

https://yle.fi/a/3-11337944
https://www.helsinki.fi/en/news/fair-society/universal-basic...

So basic income caused more happiness and less stress. But those are not profitable things, so no basic income in Finland.


What’s the alternative, if AI does turn out to be able to replace large swathes of the workforce? Just kill everyone?

You could ban it and then turn all existing employment into a makework jobs program, but this doesn’t seem sustainable: work you know is pointless is just as psychically corrosive, and in any event companies will just leave for less-regulated shores where AI is allowed.


What studies are those?


>Over the long term it harms them

Yes, but not for the reasons you state. It harms them because as a society we have zero desire to effectively combat inflation, which negates any benefits we can give people who receive the basic income.

The powers-that-be don't take action to make sure the people who get basic income can actually use it to improve their lives. Food prices rapidly inflate, education costs skyrocket, medical costs increase exponentially almost overnight.

Much like how the government backstopping student loans caused university costs to jump, promising to give people a basic income while not addressing the root causes of inequality and wealth disparity just makes things worse.

If you want basic income to truly work, you have to engage in some activities in the short term that are inherently un-capitalistic, although if done correctly, actually improve capitalism as a whole for society. Price controls and freezes, slashing executive pay, increasing taxes on the wealthiest, etc.


What's the alternative? Kill off all humans replaced by AI who are unable to do something else for a living? It's sad enough that there are food stamps, given the amount of food that regularly ends up in a dumpster on a daily basis. Humans come first, not machinery.


Nobody needs to kill anyone; people will just stop having kids, which is what's happening.


What about the people already alive? If you continuously replace them with AI, you need to support them in case of their inability to provide for themselves. I'm afraid the social safety nets available worldwide aren't made to withstand this kind of unemployment.


They'll have to adapt, like every other generation has had to.

My grandmother was born in 1924 and died in 2019. Please appreciate how much change she had to adapt to over that period.


Your grandma had plenty of opportunities in the post-war era. During her time there was always a need for human workers. While I don't think AI can actually replace anyone reliably, I can still see how executives buy into this promise and try it. This is a unique situation humanity has never been confronted with. Even industrialization required a lot of human work. If all white collar jobs went away, there would be a huge imbalance between available workers and available work. Simply adapting to this isn't a thing, given that monopolies have killed competition and it's not feasible for your everyday Joe to break into markets anymore. Kudos to your grandma for making it this long, but it's simply not a comparable situation.


Survivorship bias

What you're not counting is all of the millions of people who died because they couldn't actually adapt to the new world.

Which is fine, but they didn't need to be killed; they just became irrelevant and went away.


So you are the type of person that actively contributes to the world being as shit as it is. Good to know. Your disregard for the weak disgusts me. Have a good evening.


So what are you going to do about it? You should probably do something, then.


And when did work done by humans stop existing between 1924 and 1990? Because that's the type of change we are talking about.


Well, considering that as a bank manager she had a bunch of secretaries doing her typing, and then transitioned to a world where there were no typists anymore, that was a pretty explicit change from her perspective.

She never learned how to type on a keyboard, so you do the math.


Well, the math is that the number of jobs done by humans in that period of time is above zero.


The math is: her job disappeared, so she had to retire to a low-income housing unit funded by HUD in Houston.


The people who own the magical AIs won't decide that they want to keep us all as pets, we won't have leverage to demand that they keep us all as pets, and they will have the resources to make sure they no longer need to keep us as pets. Shouting "You should keep humans as pets" is unlikely to change this fundamental equation.


>The largest sector of US federal spending is health and social welfare.

On old people who can't or don't work.


They will likely die first when society collapses.


I think you're basing AI only on modern 2025 LLMs.

If there is an order-of-magnitude increase in compute (TPUs, NPUs, etc.) over the next 3-5 years, then even marginal increases in LLM usability will take white collar jobs.

If there is an exponential increase in power (fusion) and compute (quantum) combined with improvements in robotics, then you're in territory where humans can be entirely replaced in all industries (blue collar, white collar, doctors, lawyers, etc.).


OTOH, if there is worldwide catastrophic economic collapse due to climate change, none of these things will get built.

In French we say, "With 'ifs' you can put Paris in a bottle."


Where does all the power come from? Compute increases have to have a sustainable power source, and we don't have that.


We didn’t tax tractors, but we did tax the expanded economy tractors enabled, and built institutions to manage the transition.

Ex-farmhands had time to move into new jobs created by the Industrial Revolution, and it took decades. People also moved into knowledge work. What happens when AI takes all those jobs in far less time, with no other industries to offer employment?

If AI makes a few people trillionaires while hollowing out the middle class, how do we keep the lights on?


> If AI makes a few people trillionaires while hollowing out the middle class, how do we keep the lights on?

Tax the thing you care about? You don't really need to care about the definition of AI or what an AI is or anything like that; you care that some people got trillions.

Tax "making an absolute shitton of money" or "being worth an insane amount". Taxing AI specifically means you're absolutely fucked if Altman turns out not to earn that much but someone who makes a specific connector for data centres is the richest person in the world. Is Nvidia an AI company? What is AI? *Who cares?* The point is to figure out, as a society, some way of continuing.


Easy to say in an online forum; I imagine this could quite literally start a civil war in some nations.


There is a whole scene of wealth-transfer agent simulations. With some dynamics you easily end up in a situation where, after enough transactions, all of the wealth is concentrated in one single agent. Think "I am the state" but extended to the whole world. Billionaires trying to affect countries' elections seems like child's play compared to that.
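
(A minimal sketch of the kind of exchange model I mean - the "yard sale" setup, names, and parameters below are purely illustrative: each round, two random agents stake a fraction of the poorer one's wealth on a fair coin flip. Every trade is fair in expectation, yet wealth still condenses toward a handful of agents, and eventually one.)

  import random

  N_AGENTS = 1000
  N_ROUNDS = 1_000_000
  STAKE_FRACTION = 0.1   # fraction of the poorer agent's wealth at risk per trade

  wealth = [100.0] * N_AGENTS   # everyone starts out equal

  for _ in range(N_ROUNDS):
      a, b = random.sample(range(N_AGENTS), 2)   # two distinct random agents
      stake = STAKE_FRACTION * min(wealth[a], wealth[b])
      if random.random() < 0.5:                  # fair coin decides the winner
          wealth[a] += stake
          wealth[b] -= stake
      else:
          wealth[a] -= stake
          wealth[b] += stake

  wealth.sort(reverse=True)
  top_share = sum(wealth[:N_AGENTS // 100]) / sum(wealth)
  print(f"share of total wealth held by the top 1%: {top_share:.1%}")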


Tax is kinda tangential to all of this, but:

> Tractors largely replaced human labour in farming about a hundred years ago

And what happened around that time? Yeah, it wasn't a period of smooth calm, was it? Periods of massive change in productivity (i.e. lots of people going into unemployment) cause huge societal changes.

The thing that staved off revolution in the US was lots of spending, banking regulations, the Federal Reserve, the New Deal, and the like. Those that didn't do that fell.

So it's less about who pays tax, and more about who is going to give money to the unemployed?


If something transformative is coming in and threatens the economic flows that sustain your social model, it is worth asking how those economic flows should be proactively updated going forward.

The tractor created the middle class by giving more people access to jobs that paid better and provided more free time. It is yet to be proven who will benefit from the advancement of LLMs, but there is some consensus in the article that it will be the large companies operating these LLMs. From there, proposing taxes on that additional profit doesn't seem ridiculous.


Where do you live? Are tractors not taxed as motor vehicles in your country?


Normally not unless they travel on public roads. If it's just used for farming it doesn't have to be registered.


If we tax at the state level like motor vehicles, what happens when AI living in a Texas datacenter replaces 1 million jobs in Florida? The people are in Florida, but the money goes to Texas.


Nope.

They even get to use fuel that is taxed less, since they don't drive on public roads.


Tractors replaced a task in specific fields - largely farming and construction. AI seems to have the potential to cover more territory. The potential blast radius is greater.


Agricultural jobs accounted for more than 80% of the preindustrial workforce. Granted, you still needed people to maintain the machines, and some roles weren't entirely replaced or replaceable. I wonder how the two compare. I will say that AI has the opportunity to affect many lines of work, which makes it scary for many.


80% was peak agriculture, but involvement was already in decline before the tractor. Necessarily so — nobody would have had time to create the tractor if they were still busy toiling in the field. The tractor was the final death knell, I suppose, but only around 40% of the workforce was involved in agriculture by the time the tractor started showing up on farms.


> AI seems to have the potential to cover more territory.

Theoretically, the only two things any human has to earn money with are (a) muscles or (b) brains.

I feel like AI plus robots covers all the territory. Maybe not quite yet - maybe we have a few more years, but what job could a human do that couldn't be done by an AI controlling a humanoid robot?


I'm not arguing for taxing AI (or tractors) -- but... if we made the wrong decision 100 years ago, should we make the wrong decision again? It is worth debating.


The wrong decision wasn't using productivity enhancers - it was building a society around the idea that everyone MUST have a job, even in the presence of substantial productivity enhancers which massively decrease the number of jobs. We've scraped by so far... so far.


The problem is that for the vast majority of people to be psychologically healthy they must have a job. This isn't a societal decision, it's a reality about how humans are.

The alternative is like feeding an animal instead of letting it live the lifestyle it's adapted for. That helps it in the moment but over time its capacities atrophy and it ends up weakened, twisted and harmed with nothing to spend its natural instincts on.


> The problem is that for the vast majority of people to be psychologically healthy they must have a job. This isn't a societal decision, it's a reality about how humans are.

The "job" can be things like volunteering, artwork, finding a cause, inventing, raising children, teaching...

Work can be subsidized and based around personal interest and achieve the "psychologically healthy" aspect that you describe.


> volunteering

Sure, I guess -- if you're not charging for your time, it's more efficient to use human labor than AI+robots.

> inventing

If we get working AI, humans will be unemployable at inventing useful things.

> teaching

There are already multiple startups trying to replace teachers in the classroom.


> If we get working AI, humans will be unemployable at inventing useful things.

The point you're responding to is that humans would be able to do it for personal fulfillment and thus preserve their mental health, not to be useful to someone else.


Yeah. I also hope the AI remembers to flick the bundle of feathers on a stick to entertain me, and fill the food bowl when I'm hungry.

> inventing

When they used to say that you'd make more money going to university, that is what they were talking about. The idea was that if you went into the research labs you'd develop capital to multiply human output, which is how you make more money. Most ended up confusing the messaging with "go to university to get a job — the same job you would have done anyway..." and incomes have held stagnant as a result. It was an interesting dream, though.

But not really what everyday normal people want. They like to have somewhere they can show up to and be told what to do, so to speak.


They must have something interesting to do. It doesn't have to be a job.

The ideal society is one where humans only do things that they actually enjoy doing, whatever that is, and automation does the rest. Any human being forced to perform labor not because they want to, but because they need to do so to survive, should be considered a blight on the honor of our species.


I would wager that more jobs accelerate psychological and physiological issues than, say, volunteering or unemployment with active community engagement do. At the very least, the psychological benefits of employment are objectively an incidental side-effect of its actual purpose, which is labor for a profitable enterprise. That is to say, employment is still "functional" if it generates that labor even while destroying someone's psychological health. If that health is paramount, the structure of employment probably needs to change in order to privilege health over productivity, even to productivity's detriment. Otherwise, the vast majority of people would be better off with some other institution.


This viewpoint seems to be at odds with the well documented human phenomenon of "retirement"


Your viewpoint is at odds with the well documented human phenomenon called "Retirement Syndrome".


> when almost everyone was employed in agriculture

Employment was a product of the industrial revolution. In the age when most everyone worked in agriculture, they owned the farm operation.

We didn't tax the tractor to bail out failing small businesses then, and I strongly suspect there is no will to tax AI to bail out failing small businesses that might succumb to AI today either. The population generally doesn't like small businesses and is more than happy to see them fail.


> Should we have started taxing tractors?

Tractors are taxed in Montana. We have a "business equipment tax" that works roughly like the tax on cars, but applies to assets that don't drive on the public highway, such as tractors and other machinery. Republicans have waged a decades-long campaign to reduce/abolish it, though.


Pretty sure farmers don't buy them tax free? I'm sure they write some of the cost off, but they still foot the rest of the tax burden.


> Pretty sure farmers don't buy them tax free?

I do. Agricultural products are zero-rated. But obviously it depends on jurisdiction.


>Pretty sure farmers don't buy them tax free? [...], but they still foot the rest of the tax burden.

To clarify, this isn't about the farmer paying a "sales tax" or VAT as % of the price of buying the tractor.

The article is talking about something else: paying additional machine taxes to make up for the individual income taxes that the now-unemployed crop workers would have been paying.


Oh, I get it, but I do find it silly, because it only means that the company running the models pays more in taxes for providing you with a service, which is weird to me. Especially if they keep costs down on goods and services, allowing us to focus more on the quality of output. At least that's what Claude Code has done for my side projects.


We did in a way. Tractors help produce more goods. Those goods incur VAT at the point of sale to consumers.


They don't "help produce more goods". They "reduce the need for human labor", enabling fewer people to produce as much as before.

That's exactly what AI is doing.


These two waves of automation are fundamentally different and shouldn’t be compared.

We got lucky that when farming was being mechanized, it happened slowly and while manufacturing was still growing and could soak up the labor. When manufacturing was offshored/automated, we got less lucky and a lot of people faced a massive drop in quality of life as they lost their high paying jobs and couldn’t find equivalent ones in the service sector.

Now we're seeing potential massive job displacement, the force doing the displacing can likely also do many of the new jobs that may arise, and the change is happening faster than ever before.

Capitalism doesn’t promise to create new jobs when old ones are automated, we’ve just gotten lucky in the past, and our luck has run out.


"We didn't do X before, therefore we shouldn't do X today under very different circumstances" is not a good argument.

You’ve already stated what circumstances are different now.


I liked how you could buy an official Roomba spare parts kit, though.

In a mechanical device meant for messy places, parts necessarily wear out quicker than in most electronics, and being able to buy and swap out the parts easily seemed like a nice feature.


We moved this year and couldn't easily get my old(ish) Roomba i5 to work on my new wifi. I've been meaning to debug the problem further, but if it can be confirmed that it's an iRobot issue and there's nothing I can do, it would save some effort.

It sucks, though, that I can't use my fucking vacuum cleaner because a remote server of the manufacturer has decayed. Does anyone know if there are robotic vacuums that work fully locally, without remote servers?


I can't give 100% confirmation, but it was working one day and not the next, with no changes to my network along the way.

Yes, it is an absolutely infuriating state of affairs, and one could claim we were naive not to see this coming. Needing to be this cynical is the root of the crisis of trust. The only thing we can rely on is that everything is a race to the bottom.

That being said, there aren't many commercial offline robot vacuums. I bought a secondhand Roborock unit that is on the approved list put out by Valetudo. I got one that required some disassembly to flash, which maybe lowers the market price. It's been working great and the Home Assistant hooks are working. There isn't a company on the planet in between me and my robot vacuum now.


I have an i3 controlled by Home Assistant; it is on an "IoT" network without access to the Internet. Works like a charm. The integration allows starting, stopping, and viewing information like battery level, area cleaned, issues, etc. No mapping though.

The only caveat is that the legacy app is required to associate it with a WiFi network. So if the app is pulled from the app stores, it may not be able to connect again after a factory reset. I don't think the pairing requires access to the Internet, but it uses a Bluetooth protocol that I don't think anyone has reverse engineered yet.

Edit: I vaguely remember that mine also stopped working a year or so ago. I factory reset it, re-paired it and it has been working well so far.


> So if enacting these anti-consumer practices were actually more profitable, why is Epic doing so shit?

Because there's a huge network effect in play here and Valve was first in the market.


That doesn't explain their surge in growth only in recent years; it's not like gaming is new. No, it's all the new features they are offering and the goodwill they have engendered.


One of the characteristics of network effects is that you see growth simply by virtue of being the first/biggest.


1. I work in finance, and here people sometimes write math using words as variable names. I can tell you it gets extremely cumbersome to do any significant amount of formula manipulation or writing with this notation. Keep in mind that pen and paper are still pretty much universally used in actual mathematical work, and writing full words takes a lot of time compared to single Greek letters.

A large part of math notation exists to compress the writing so that you can actually fit a full equation in your field of vision.

Also, something like what you want already exists; see e.g. Lean: https://lean-lang.org/doc/reference/latest/. It is used to write math in machine-checkable form so that proofs can be formally verified. No-one wants to use this for actually studying math or manually proving theorems, because it looks horrible compared to conventional mathematics notation (as long as you are used to the conventional notation).
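
(To make the notation point concrete - a minimal sketch, assuming Lean 4 with Mathlib; the theorem name `sq_add` is made up: even a one-line blackboard identity like (a + b)^2 = a^2 + 2ab + b^2 gets this much ceremony.)

  import Mathlib

  -- The blackboard identity (a + b)^2 = a^2 + 2ab + b^2, stated over the reals.
  -- The `ring` tactic discharges the proof automatically; the point is the
  -- ceremony of the notation, not the difficulty of the proof.
  theorem sq_add (a b : ℝ) : (a + b) ^ 2 = a ^ 2 + 2 * a * b + b ^ 2 := by
    ring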


Valve is sort of like a modern Bell Labs for software. It has a near-monopoly on PC game sales, which results in massive profits. Then it uses part of those profits for the public good, on projects that are at best tangentially related to its actual business.


If you check out the man's social media, the hilarious part is that he is not even trying to sugarcoat it in any way. It's like: "yup, I'm ruining your internet, what are you gonna do about it?"


Check out the guy's social media (links in the original article). The shit is downright hilarious. He's very self-aware about how horrible his business is.


I just started watching Pluribus, so this one got a laugh out of me:

https://x.com/nasaoks/status/1995382466237108317?s=20


In many cultures, there is/was also the idea of cyclical history. Things don't go forward or backward, they just repeat themselves in slightly different ways infinitely.


It reminds me of Vernor Vinge's Zones of Thought trilogy, especially the observation the traders make in the second book that all planet-bound civilizations are doomed to collapse at some point. They are usually able to restore technological progress more quickly the more records they have, but without leaving the planet they are still doomed to repeat the cycle. IIRC there are even more-or-less standardized "uplift" protocols - series of technological reveals for less-developed civilizations to rapidly advance/restore their capabilities.

I wonder if there is academic work comparing past-focused, future-focused, and cyclical views of human progress in literature.


"Collapse" is maybe hyperbole in this case, if it's building on our own history to extrapolate forward. For us, certain societies have collapsed, and with them have been lost certain practices or technologies, but human civilization as a whole has been largely steady or growing since the agricultural revolution (using population size as a heuristic). There's always the threat of ecological collapse, but that's something that has only happened a few times in the history of life on the planet, and we haven't really faced anything like it before at civilization-wide scale. There's always been another group to move in and take up the abandoned land. Without some major technological breakthroughs, yes, we're likely to face a collapse eventually, but as a biosphere, not merely a civilization. Short of that, people seem to keep on keeping on.

I think the mistake comes from something common to a lot of sci-fi, which is mistaking the scale of a planetary setting. It takes a lot of energy to disrupt life on a global scale (we're managing it, but it's taken hundreds of years). "At some point" is carrying a lot of weight in that observation.


> "Collapse" is maybe hyperbole in this case, if it's building on our own history to extrapolate forward.

In the story, "at some point" generally involved technologies we are currently incapable of; the greater technology actually facilitating the greater collapse. Which at the most obvious included nuclear catastrophe.


I think my contention is with "collapse" rather than something like "crash". The latter implies a cyclical downswing (reasonable), the former implies the absolute end of a cycle through the non-viability of the prior order. One means "start the round/match over," one means "find some wood to start carving a new board and game pieces". The new game probably won't be recognizable, and you're talking about not just the events but the setting and context being unfamiliar. Every civilization goes through that? I suppose, but only because every life-bearing planet goes through that, civilization-bearing or not.


> always been another group to move in and take up the abandoned land

Completely agree with your points, but I think it’s worth mentioning that the collapsing populations may not have been aware of this depending on their level of isolation and cultural view on outsiders.


Sounds like Niven and Pournelle's Motie civilization cycle from "The Mote in God's Eye"


Excellent books.


Though isn't progress inherent, in that knowledge tends to increase over time? What's useful tends to get passed on to future generations, so there is an inherent advantage compared to earlier generations. Of course it's not perfect (sometimes things get forgotten), and knowledge/skills don't always translate into increases in living standards, productivity, or well-being, but by and large, in the long run, this should be true?


“All this has happened before. All this shall happen again.”

