The Singularity is Near: How Kurzweil's Predictions Are Faring (2017) (antropy.co.uk)
69 points by segfaultbuserr on Oct 16, 2020 | 120 comments


Tangentially, I think "what didn't happen" over the last few decades is an interesting question.

Most of Kurzweil's predictions have two parts, implied or explicit. One is the technology itself. The second is how that technology affects the world. Between the two (call it part 1.5) lies a lot of ambiguity whenever anyone tries to "score" Ray.

E.g., displays will be built into our eyeglasses: tech that essentially exists but hasn't become widespread or impactful yet. Sometimes tech does become widespread and impactful, but we still predict the impacts wrong.

PCs have been on every desk for 25 years. But administration (to the extent that measurement is possible) seems to have gotten less efficient. HR departments, schools, etc. need more administrators today than before PCs, which is weird considering PCs are paperwork-efficiency machines.

Meanwhile, dating and social life have been radically transformed by personal computing.

A lecture hall, a library, a university: each is notably inferior to its digitised counterpart (e.g. you can pause a recorded lecture), yet Harvard is still Harvard. Even a local college is still the local college. Universities persist despite their most legible raisons d'être becoming nonessential.

When I was in college, "Keynes' 15-hour workweek prediction for the '80s" really fascinated me. He was right, on paper, in his efficiency/growth assumptions. I felt the professor's explanation for why the prediction failed (people decided to keep working lots and buy more stuff instead of chillin) was totally unsatisfactory.

"Why history didn't work out this way" seemed a much better basis for understanding the past than post fact theorizing of traditional economics. Lets take theory and its predictions and see why it failed. Often, I think the projected premises turn out correct, but their impacts do not. Demonstrates what we do and don't have a grasp on.


> I felt the professor's explanation for why the prediction failed (people decided to keep working lots and buy more stuff instead of chillin) was totally unsatisfactory.

I disagree with this conclusion. I, for one, never decided to work more; I simply don't have a real choice. No one will hire a 15hr/wk person except in positions that don't pay enough per hour to make a living wage anyway. If I could cut my current salary to 37.5% to only work 15hr/wk I'd do it in a heartbeat.


I agree. That means we both disagree with what was (at least circa 2003) a fairly standard economist's perspective.

In a nutshell, Keynes made assumptions about economic growth (correct). He then riffed on assumptions about income effects (if you have more money, will you work less?) and substitution effects (if your hourly pay rises, will you work more?). Those are not very hard to populate: ask poor people, observe higher-income people's behaviour.

The standard retort my professor gave (people decided to keep working lots and buy more stuff instead of chillin) is, IMO, the favoured explanation because it's parsimonious with theory. It fails when it meets reality, in the form of people. Ask a median person why they don't chill more and work less and you'll get strange looks.

If interested, I think concepts in the "cost of thriving index^" are probably key to understanding why economists are (I agree) mostly wrong on this.

In any case, I think the Keynes prediction is an invaluable starting point... being from the past. A modern economist might defend the theory with the "chill more" defense. I doubt they would bring it up, though. No one is going to make a false prediction about the past to test a theory they support.

^ https://media4.manhattan-institute.org/sites/default/files/t...


That gets into politics, history, sociology, how the subject of economics itself interacts with politics and affects history. Best to move on quickly and discourage the students from thinking about it.


> If I could cut my current salary to 37.5% to only work 15hr/wk I'd do it in a heartbeat.

This raises another problem that Keynes may not have foreseen (not sure when he made this prediction): people don't necessarily realize how much it costs to employ them beyond their salary. Things like health insurance, software licenses, and physical equipment/space don't scale with the hours an employee works (at least, not linearly). 37.5% of pay for 37.5% of the normal work week might sound good to you, but it sounds pretty bad to your employer: they get 37.5% of your working hours but pay well over 37.5% of the original cost. Even if given the choice, I highly doubt most people would accept a superlinear pay cut in exchange for decreased hours.
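
For illustration, here's a rough Python sketch of that overhead arithmetic. The salary and fixed-cost figures are made-up assumptions for the example, not numbers from anywhere in particular:

    # Illustrative only: assumed numbers showing why a proportional pay cut
    # still looks expensive to the employer when per-employee overhead is fixed.
    salary = 100_000          # hypothetical full-time salary
    fixed_overhead = 30_000   # hypothetical costs that don't scale with hours
                              # (insurance, licenses, desk space)

    full_time_cost = salary + fixed_overhead            # 130,000
    part_time_cost = 0.375 * salary + fixed_overhead    # 67,500

    print(part_time_cost / full_time_cost)  # ~0.52: the employer pays roughly
                                            # 52% of the cost for 37.5% of the hours

Under these assumed figures, the employer's cost per hour worked rises noticeably even though the salary cut is exactly proportional.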


You should take an MBA class or open a business.

You will see that those one-time/flat-fee costs do not scale, but to the benefit of the employer.


Same!

Microsoft may be making this more available... https://www.theverge.com/2020/10/9/21508964/microsoft-remote...

> Flexible working hours will also be available without manager approval, and employees can also request part-time work hours through their managers.


Become a freelancer. I work maybe 20 hours per week max, and still earn more than I did as a director of 30 people.

Just get your sales in order and fill a niche.


Can I ask what your niche is?


Sure: deep learning for crowd monitoring and custom IoT solutions.


That is fairly specialized!

It sounds like you're in a great position.

"Getting one's sales in order" is by far the hardest part, though. I have no idea how to do that.


It's hard in the beginning, true. Make sure to tell EVERYONE what you are doing. Most of my early opportunities came through people who knew people who knew people with money and problems. You'd be surprised how many companies look for help but don't really advertise it.

After a year or so, new jobs will begin to just flow in, as you will have built creds. Then you can essentially pick and choose.

Also, learn to think like your buyers. Help them win. Don't sell; cooperate. Be a partner.


The option to quit and only work either every other year or every 6 months is more viable. Though still job/industry specific, many people pull this off.


Yeah, because a resume with that work history is going to go over so well?


I hear this a lot but is it really a thing? It’s so normal to have gap years now and I feel the vast majority of employers would be understanding. I certainly wouldn’t find it strange that an applicant had gaps in their CV.


Seasonal jobs or contracting work etc can cover up such things.


You can always be eccentric, go off script and life hack Keynes' predictions into existence. However, people mostly don't and economics is about what people do.


I think that’s overly dismissive as many people do work like this.

It’s up to the individual if doing animation work in 6 month bursts for individual movies is enough or if they want near full time employment. 6 month rotations are common in many tiny industries like super yacht crews. Major industries like education, farming, construction, and tourism are quite seasonal and it’s up to individuals if they take that time off or work an off season job.


For the industries you listed the same would have been true in the 1930s when the prediction was made, so I don't think Keynes was referring to those. In all likelihood he was talking about manufacturing and professional jobs. The divergence seems to be that the blue-collar workers were fired and/or their jobs offshored, while not much has changed for the white-collar workers except the digitization of paperwork and the elimination of support roles, with the accompanying responsibilities now transferred to the professional. I think we screwed the pooch on this one and we should own up to it.


Or work 7 years and then retire. https://www.mrmoneymustache.com/


> people decided to keep working lots and buy more stuff instead of chillin

That's not why people don't work 15-hour work weeks. It's because we're forced not to, due to artificially induced scarcity of essentials like housing, along with a competitive labor market where employers generally demand 40+ hour work weeks and can get away with it because they have the leverage.


Plus, laws keep people from organizing in some ways (secondary strikes are illegal) but not in others (combining two companies so they can cooperate is generally OK, even if it would be illegal for them to cooperate in certain ways while separate). Contractors can't effectively organise in some situations.


Also, employers will totally get flak if they systematically hire part-timers. See the current Uber/Lyft woes regarding contractors/employees. If the government gets its way, and it may, these companies will eventually say "you're only worth your benefits if you work 40 hours".


> administration (to the extent that measurement is possible) seems to have gotten less efficient.

This is an interesting "problem". After all we now have more ways to manage data, so we generate more data. And usually this means more administration. The question is, are we getting more out of it?

Also, we see what reduced administration looks like. Every webshop, every online platform where you just click something and you can buy/sell/report/ban.

Aaand, there's the incumbent lobby doing regulatory capture to resist de-administration, e.g. tax filing.

> Harvard is still Harvard.

It took this pandemic to show the world how fragile that status quo is. Plus the recent admission discrimination lawsuits and scandals show that it isn't really about education.


>>The question is, are we getting more out of it?

This really is the question, quite literally, in economics. That's basically what economics studies.

The outputs of a car factory are easy to measure. How many cars came out? We can quibble over quality, but quantity is most of the game and that is measurable. Administration is harder to measure.

David Graeber, in his provocateur style, concluded that they are more efficient in the sense that they produce a lot more paperwork... which is their output.

Economists would consider this nonsensical as paperwork itself doesn't benefit anyone and can't be considered an output. He wasn't an economist though.


> Economists would consider this nonsensical as paperwork itself doesn't benefit anyone and can't be considered an output. He wasn't an economist though.

Or they would observe that wages are paid to produce it and estimate its worth based on the total wages and total pages of paperwork. It's a very blind method, but it makes a certain sort of sense.


...effectively, they do do this. The baseline assumption is generally that "prices are correct" and that such paperwork is worth more than the salary to whoever pays for it.

When theorizing though, economists don't consider the paperwork itself an output. If that administrator works for a car company, it's considered to be a contribution towards outputting cars.


To be honest, I generally trust anthropologists to deliver more insight at the human level that administrators work at; economic theory makes the most sense only in the special circumstances where a large group of people is trying to be coldly calculating within a market setting.

Most people just try to escape bureaucracy, so good empirical measurement is about all I currently hope to get from economists.


Economics, like a lot of "social sciences" tends to get caught up in hammer-nail problems. But uniquely, economics is highly influential on politics and was at the heart of cold war philosophical battles between the US & USSR. This hardened heads.


Different aspects. It's quite possible that most jobs are low-value bullshit (as observed by anthropologists) while at the same time they make a non-zero contribution to the economy, because there's a market for whatever output they happen to produce (as observed by economists).

And both seem to be very much true. After all, the economy is not just made of cars: there are paintings and festivals, people who pay to travel to other places just to see them with their own eyes, charity, insurance, finance, and other meta-services, and whatever else humans prefer over keeping their money in their wallet (or in ETFs). Add to all this the good old principal-agent problem, and we can quickly see why there are bullshit jobs: a lot of decision-makers (especially when responsibility is diffused) opt to organize work in a way that helps themselves by helping those who are close to them (e.g. nepotism, cronyism, but also watered-down versions of them, where someone just gets a bullshit job).

Furthermore, it's a testament to how much economic surplus is produced nowadays that so many relatively low-output jobs can be sustained.

Aaand, there's also one other important point regarding these seemingly do-nothing paper-pusher jobs. They are buffers of knowledge and know-how that protect organizations against fluctuation. They provide stable hand-off points for more productive, faster-moving folks. (Literally, if someone changes positions within or between organizations, they need someone to tell them about the boring details. And since reading all the thousands of documents produced each year is mostly useless, it's economical to have people keep the stuff in their heads.)

Could it be done better? Sure. That's what startups try to do, to "disrupt" the old inefficient network of work. But as they grow they too face the same problems (though of course usually on a different level).


>PCs have been on every desk for 25 years.

Actually, this has regressed these days. The invention of the smartphone means many people no longer have a computer with efficient input devices: a mouse and keyboard. Although the smartphone is handy, it makes people passive. The smartphone turns people into mere consumers of art rather than producers of it, because the smartphone is inefficient for input and people don't much care.

Yes, the majority of people are consumers, but if not for the smartphone we could potentially have more producers of art. It's an unfortunate future I didn't expect 25 years ago.


It turns out you cannot simply network computers together: you are connecting them to the existing network, the people network. A non-networked machine is not wired to people. Once machines got wired into the people network, software models needed to be updated to reflect people issues (trust, spam, crime).


Solid point.

This comment really demonstrates why certain kinds of wrong predictions are very valuable.

Extremely relevant as we move into our second attempt at making WFH work.


Keynes failed to see that competition is relative: how hard you have to work is relative to the people you're competing with. So where did all the extra value go? It accumulates in any place that is inadequately checked by competition: monopolies.


The textbook explanation for why Keynes' workweek prediction failed is that he failed to account for the shift to a service economy, which provides ample room for people to expand what they work on and what they spend on.

The crisis within that explanation is that the number of people employed in the "service" category has not grown proportionately with gains in productivity. In fact, it's barely grown at all. The type of job that has filled the gap is "administrative/clerical."

This observation has led David Graeber to propose that perhaps Keynes was essentially right, and that the only reason we don't have the shorter work week is basically that we refuse to let it happen. We would rather continue "working" whatever number of hours was customary in our culture (40 hours a week in the US, less in Europe, many more in Asia) and if there's not enough to actually do (there isn't) we have a small number of people do the work for 40 hours and the rest are just managers and administrators sitting around in a redundant meeting or staring at a screen waiting for the next redundant meeting.


What got me thinking of this prediction was actually Graeber, and Tyler Cowen. Both noted some evident facts about productivity that I hadn't heard mentioned often.

Stuff that was simply "out of theory" and irrelevant to it despite seeming logically contradictory to theory. The observations about efficiency in administration in the digital age is particularly striking.

I do think Graeber tended to overstate conclusions. I actually think we understand inflation wrong, and that this becomes relevant over long time scales. Some productivity trends just overestimate reality. We haven't really become as wealthy and efficient as the measures suggest.

Another problem is non-fungibility. Not owning a smartphone in 1970 is different from not owning one in 2020 in ways that can't be quantified in dollars. You can't buy a 1960 education or 1990 healthcare. It isn't sold, and it wouldn't have the same value even if it were. It's way too complicated to dismiss with an "our expectations have gone up."


To an extent we can compare with less developed countries where it's still possible to buy the equivalent of a 1960 education or 1990 healthcare.


Monopoly billionaire CEOs worked 100 hours a week during periods of intense competition (pre-monopoly), not 40. The extent to which one is not forced to work 100 hours a week is the extent to which competition is not yet very fierce. As discussed in the book Moral Mazes, even if there is no work to do, middle managers must still compete for scarce executive positions and thus project an appearance of long hours.


If that hypothesis were correct, then why haven't smart CEOs fired all the redundant workers? They certainly have ample incentive to do so, and in the US at least there's nothing to stop them from downsizing if they wanted to.


My guess is that there is political power in employing a lot of people.


If this is Keynes's failing (and I agree it's part of it), then the majority of economists still make this mistake.

Where they do acknowledge it, it's usually via frivolous examples. Your friends have cool cars, so you need one too.


Another angle, as discussed in The Mythical Man-Month: due to the O(n^2) growth of communication overhead, the optimal team size is 2. Two people working 100 hours a week >>> five people working 40 hours a week, as is obvious in any early-stage startup.
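
As a minimal sketch of that arithmetic: the team sizes and hours are the ones from the comment above, and the channel count n*(n-1)/2 is the standard pairwise-communication formula.

    # Pairwise communication channels grow as n*(n-1)/2, so the same total
    # person-hours spread over more people buys much more coordination overhead.
    def channels(n: int) -> int:
        """Number of pairwise communication channels in a team of n people."""
        return n * (n - 1) // 2

    for team_size, hours_each in [(2, 100), (5, 40)]:
        print(f"{team_size} people x {hours_each}h = "
              f"{team_size * hours_each} person-hours, "
              f"{channels(team_size)} pairwise channels")
    # Both teams supply 200 person-hours, but with 1 vs. 10 pairwise channels.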


On the 15 hour work week front: I think Keynes and other economists missed social trends here. What they didn’t predict was a culture that glorifies being busy. Look at the blogs from entrepreneurs in the past decade bragging about their 60 hour work weeks.

To be clear, the vast majority of these people are lying, either to us or to themselves. What's important is not how much they actually work, but that publicly claiming to overwork is something they feel is socially necessary.


> What they didn’t predict was a culture that glorifies being busy.

I'd argue the culture glorifies being busy in part because the folks benefitting the most from the current arrangement want it to.

And also because of the whole America/Protestant work ethic thing, which isn't all wrong but ...


why the prediction failed (people decided to keep working lots and buy more stuff instead of chillin) was totally unsatisfactory.

Why not?


Why unsatisfactory?

Essentially because I think Keynes (and most economists of his era) was right in his basic assumptions. If you made the average person in 1920 10x richer, they'd have behaved much like the "leisure class" of the time.

If you ask the average person "why don't you work 65% less and chill?" most people would look at you funny. It's not a choice. I have rent, etc. A solution that relies exclusively on choice fails the smell test.

I have my own ideas about the solution, but that's beside the point. My point was that Keynes, not knowing the future, gave a more intellectually honest projection of what the theories predict than any post-facto economist would have.

note: no reason to downvote a respectful, on topic comment. Asking "why not" is the whole point of discussions.


"why don't you work 65% less and chill?" most people would look at you funny. It's not a choice. I have rent, etc.

I see what you're saying, but at the same time, at what annual income level do humans actually start "chilling" more and working less? 500 thousand? I'd be curious to see statistics.

Where I live (Canadian city), a lot of parents are working less, through a variety of arrangements (stay-at-home mom, consulting, part-time work, parental leave). Some of them have pretty low incomes. Society doesn't make it exactly easy, but it's certainly possible, it's just not yet the mainstream option. Social safety nets obviously help.

Wikipedia has this nice chart: https://upload.wikimedia.org/wikipedia/commons/3/3b/Heures_t... . Some countries have seen impressive reductions.

Anyway, I definitely agree with that:

A solution that relies exclusively on choice fails the smell test.

Wealth inequality, etc...


IDK really. Income correlates to a lot of stuff though, and that makes measuring complicated. A lot of high earning jobs are also intellectually stimulating, high status, high power, etc. Those are the jobs people may do regardless.

For his part, I believe Keynes asked (poor) people at what salary they would work less and also observed the "leisure class" of his day.

For my part, I think the "cost of thriving index" is a good starting point. For most people it's $X plus the amount needed to buy a house and a car, pay tuition, healthcare, etc. If you add up all these "basic" expenses, the "cost of thriving" has far outpaced official inflation and even the median salary.


> I felt the professor's explanation for why the prediction failed (people decided to keep working lots and buy more stuff instead of chillin) was totally unsatisfactory.

If we accepted the quality and standard of Keynes' era for healthcare, entertainment, transportation, clothing (wearing almost the same clothes every day), appliances, food, and real estate constraints (for most young people without pledge-able assets today, that means rent or cash-over-the-barrel, no mortgages), I can see a 15-hour work week. There are very few people who willingly accept those limits and eschew modern improvements. Keynes, to my knowledge, never said 15 hours plus improved technological benefits in all areas of life [1], mostly only the technological improvements that brought about economic productivity enhancements.


I really like this idea of 'what history didn't happen'. It's not 'alternative history', in the literary fiction sense, but more ... academic? Granted, the fictional sense is more compelling to read.


I think it's partly human behaviour: people like doing things rather than being unemployed, even if you hand them some cash. We are awake about 112 hours a week, so 40 hours = 36% and 15 hours = 13%; maybe 13% is too little, and we should look to make work fun rather than cut hours. I figure even if we have AI robots that can do everything, people will still teach each other yoga, acting, and the like.

Also people are competitive - maybe 15 hours can pay for food and shelter but then the Joneses next door work 60 and buy a Porsche.


> Keynes' 15 hr workweek prediction

I'm optimistic about this, with people who work remotely. There are a lot of jobs that don't require much more than 3 hours of real effort per day.


Yeah, a backdoor.


If you want people to be able to work 15 hours a week and have enough to make ends meet (and I think most people would jump at the chance), capitalism is not the right economic system for that goal. Capitalism is a machine that maximizes returns on investment; having people only work 15 hours a week doesn't do that.

Also, there’s a hidden assumption in there that child care doesn’t count as “work”. A child needs 16 hours of care and attention every day and no economic system can reduce that.


I'm not sure that the author's assessment that "36% accuracy rate" ... "there is no reason to doubt his date of 2029" follows.

I'm no statistician, but my intuition tells me that if your accuracy drops to roughly a third every 10 years, you're shooting well under 10% by 2029, so it seems crazy to say that we're "on track" for that date. This reminds me of the insanity some project managers use to estimate completion dates. They say something will take 1 year; after 2 months they have done 1 month of work, and they update their total estimate to 13 months instead of 24. And I'd argue it's even worse for Kurzweil's prediction, because in his case it's an exponential growth factor that's off, not a constant factor.
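
To make both extrapolations concrete, a quick sketch; the dates are my own assumptions (predictions made around 2005, graded against roughly 2015), not figures from the article.

    # Rough arithmetic only; the numbers and dates are illustrative assumptions.

    # The project-manager re-estimate: 2 months elapsed, 1 month of work done.
    original_estimate = 12                              # months planned
    elapsed, done = 2, 1                                # months elapsed vs. completed
    slip_only = original_estimate + (elapsed - done)    # 13 months (naive update)
    rate_based = original_estimate / (done / elapsed)   # 24 months (velocity-adjusted)

    # The accuracy extrapolation: ~36% of predictions held over roughly a decade;
    # if accuracy compounds the same way per decade, by 2029 it is under 10%.
    decades_out = (2029 - 2005) / 10
    accuracy_2029 = 0.36 ** decades_out                 # ~0.09

    print(slip_only, rate_based, round(accuracy_2029, 3))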

Want a prediction to sell a book? Singularity by 2029. Want the truth? The singularity won't happen by 2029.

Note: I'm using 10 years here because the predictions from 2020 were not graded, and the last timeframe for graded predictions was ~2015


Kurzweil never said singularity by 2029. He said 2045. 2029 is the date he predicted for a computer passing the Turing test, which is a different thing and may be kind of possible judging by GPT-3.


“By 2029, computers will have human-level intelligence,” Kurzweil said in an interview at SXSW 2017.

That’s quite a bit more than TT.


Summary of the article is in this paragraph:

> This is actually quite impressive! Although 9 out of 25 is only a 36% accuracy rate, I still remember when reading The Singularity is Near for the first time that almost all of the predictions seemed wildly optimistic and sort of crazy. It seemed a bit unlikely that it would be possible to have a high speed internet connection from a touchscreen super-computer everyone has in their pocket that can also act as a personal assistant that you can speak to in natural language and it will usually understand and respond appropriately, albeit unable to have a full conversation at this point.

So his best record is in making predictions about the internet and ubiquity of portable computers (smartphones), but he's been less successful in his optimistic predictions for full-immersion virtual reality, and the jury is still out on the singularity.


I think that even this article is overly generous, marking predictions as "true" that were equally true when they were made as they are today. For example, "Most colleges will follow MIT's lead, and students will increasingly attend classes virtually" is marked as "true", which reads to me as particularly laughable in light of the enormous disruption to universities that COVID-19 has caused. Yeah, distance learning is a thing; it was a thing in the 70s too.


Yes, to take predictions seriously, we should ask the visionary for a rough timescale. And we should also ask not just them but their peers on how likely they think the prediction is to come true. It's not impressive to identify an existing trend and say that it will continue; all of Kurzweil's peers would have been able to see that. Similarly, it's not impressive to keep kicking the can down the road as a prediction eventually becomes truer, but only 50-100 years behind the timeline the visionary implied. Ultimately if the prediction doesn't follow fairly rigorous criteria, it's neither falsifiable nor impressive.


IMO Kurzweil doesn't make predictions, he makes wish lists.

It's useful to consider obvious misses - for example, he failed to predict fake news or the dystopian surveillance and behavioural influencing culture being created by ad tech.

This has obvious implications for practical AI. If the black-hat possibilities aren't considered and mitigated, experience to date suggests they'll dominate.


> he failed to predict fake news

In former days, "fake news" was simply named "propaganda" or "bad/manipulative journalism".


I think "fake news" at least started as something a bit different - propaganda sure, but of a particular form, i.e. in the form of a fabricated news event propagated through media channels as if it were true.


This is not a new thing. The yellow-press journalism of the US happened well over a century ago, and indeed you have papers which talk about Elvis and aliens, which would meet my definition of "fake news".

The only thing that changed was the distribution method (i.e. over the internet/social media).


That's true. There does seem to be some difference though - stories like Elvis Found Alive in Anchorage and Batboy Graduates with MBA were pretty much regarded as entertainment, whereas Clinton Eats Babies and Trump Videotaped Getting Peed On were received as true by large segments of both the population and the media that normally does not make up stories.


American intervention in Cuba due to newspapers absolutely fits the “most people believed it” criterion [1]:

> American newspapers fanned the flames of interest in the war by fabricating atrocities which justified intervention in a number of Spanish colonies worldwide.

[1] https://en.m.wikipedia.org/wiki/Propaganda_of_the_Spanish–Am...


Yes, but its ubiquity and political impact were vastly underestimated. We'll pay dearly for that.


Considering "fake news" (literally commandeering the media and planting untrue stories) was a tactic used by Hitler and is considered to be a basic tool of authoritarianism, I'd suggest that no one has underestimated its impact.

In fact, fake news started WWI.


> literally commandeering the media and planting untrue stories

We assumed that only happened with state-controlled media: that one would first take control of the State, then take over the media to cement it. We weren't prepared to deal with the Murdochs taking control of the State by controlling part of the media.


What he seems to get wrong is the adoption: the tech is there in niches and works reasonably well, but it's just not widespread. People just don't see the value in using, say, VR or Google Glass.


But that's fundamentally the prediction; VR tech has been available in niches since the 90s.


As William Gibson says, the future is already here. It's just not evenly distributed.


Remind me why anyone listens to Kurzweil? Anyone can read a bunch of sci-fi and make wild, scattershot extrapolations into the future, and do as well or better than he does.


It's inspiring. I consider Kurzweil, Diamandis, etc. to view the impact of technology through wildly rose-colored glasses. I view this as one possible trajectory of our future, the extreme positive end, whereas a very negative outcome may be akin to a "great filter" event.

I think the extreme negative or positive outcomes are both unlikely long-tail events but having a "singularity" that benefits the wellbeing of humanity is definitely worth striving for.


>benefits the wellbeing of humanity

If it even happens, it will not be shared equally. A regular person will have to log in with a Facebook account with the right amount of social credit to vicariously experience a part of the ascended's experience, with ads. And no, trickle-down isn't real.


I definitely think a cyberpunk-ish future is more likely than a pure Kurzweil outcome. That is, the increase in wellbeing of the average Joe will be more linear, and that of the well-off will be more exponential.


Well put.


Because he wrapped a combination of nerdy sci-fi and a theory of never-ending economic growth around an updated Christian-ish proto-religion.

A certain type of person is basically wired for it.


Everyone is wired to have a worldview. And although people think that their brains are different and have some special type of rationalism supporting their belief systems, that's not the case.

I think that the reality is that worldviews are necessarily reductionist at some point. Think about just literally looking down a road far into the distance. If you can see far enough the stuff at the end of that road just looks like a tiny dot which you can't make out.

No one can actually predict the future. Some people like me think that technology will probably shortly change things dramatically.

I would be interested to hear your worldview and predictions for 2029 and the 2030s. But just a warning, I am going to classify those predictions as beliefs.


Not true. Although he seems like he is making crazy speculations, there is a method to his madness: it is driven by the exponential growth laws which he documented and extrapolated very well.


Exponential growth is not a law, nor is extrapolation a valid scientific technique.


They can be, when used appropriately.

To give a particular example, the growth of computing power has been a close-enough match to exponential since Kurzweil became famous. It was about 20 years ago that Apple was boasting that their latest PowerMac was banned from export to Cuba because a 1-gigaflops computer counted as a weapon; now they boast their phones do 11 teraflops, which is compatible with a 1.5-year doubling time.

(That said: while I have met someone who expected the exponential to continue past single atom transistors, I do expect it to stop soon-ish; superconducting/3D/quantum computing may replace classical transistors, but I never expect to see multiple transistors on a single atom).
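
For what it's worth, the implied doubling time checks out as a back-of-the-envelope calculation; the 1 GFLOPS and 11 TFLOPS figures are simply the ones quoted above.

    import math

    # ~20 years from a 1-gigaflops PowerMac to a claimed 11-teraflops phone
    years = 20
    ratio = 11e12 / 1e9                  # ~11,000x improvement
    doublings = math.log2(ratio)         # ~13.4 doublings
    print(years / doublings)             # ~1.5-year doubling time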


An atom can’t be a transistor. That’s like saying “a single atom banana”. An atom can encode spin state, which can be used for computation.


I don’t make any claim of being a material scientist, I just wish to note that there is something with the name “single atom transistor”: https://en.m.wikipedia.org/wiki/Single-atom_transistor


Maybe an atom can be something more powerful than a transistor and be able to emulate the properties of a transistor.


It is a "law" in the same sense as the MacDonald's Rule (countries that host MacDonald's franchises don't go to war.)

It works until it doesn't.


True, and I can say the same about Hooke’s law, Newton’s law of gravity, and so on.

Extrapolation outside your dataset is only a problem if you take it too far, and how far is “too far” depends on how well you modelled reality with your dataset in the first place.


Moore's law and Wright's law may not be natural laws, but they have mostly proved to be consistently accurate observations.


Yeah, and that's still not taking into account the energy we're using and the manpower this energy liberates.

But this energy is not infinite. Despite all the innovations in solar and wind, and assuming efficient storage that does not exist yet, the EU won't be able to absorb a 3% decrease in oil and gas imports by 2030. Rystad (https://www.rystadenergy.com/), pre-Covid, was forecasting a 2 to 8% decrease in EU oil and gas imports, according to the think tank "The Shift Project" [0][1].

I won't talk about manpower much because there is still a chance that nothing heavier than Covid will hit us in the near future, but meals will be hard to get this year in eastern Africa, and while India did not suffer as much this year as in 2019, another drought like 2019's next year would be even more disastrous. For the record, this winter could have been worse for Africa than 2011; luckily the wildfires and drought in Russia were localized further north than a decade ago.

But well, we still have two silver bullets: fusion and renewable hydrogen via electrolysis.

[0] https://theshiftproject.org/wp-content/uploads/2020/06/Study... complete data analysis

[1] https://theshiftproject.org/wp-content/uploads/2020/06/fig40... short graph.


In the real world, any period of exponential growth is part of an S-curve (coming to a plateau) or a bell curve (reaching a maximum, then decreasing) of some sort. You can't cheat the laws of physics and entropy.


I don't think Moore's law is a law in the traditional scientific sense; it's more of a boardroom-driven agenda, a self-fulfilling prophecy.


Extrapolation, as the general act of generalizing from observed events to not-yet-observed events, is necessary for making any kind of prediction about the future, and for any awareness that goes beyond direct sensory perception. I think that's within the purview of science.


"Not a valid scientific technique" is not a valid objection to a nonscientific process.


Kurzweil is making predictions. Extrapolation is like the only method of making predictions.


> Remind me why anyone listens to Kurzweil?

In general, people do not adhere to the egalitarian message board nerd ethos that all opinions are equally important. The opinions on technological progress of someone who has invented as much technology as Kurzweil carry extra weight.

(I agree that he's wrong about, you know, a ton of stuff...)


This has been done more systematically. For example, for 2009: https://www.lesswrong.com/posts/kbA6T3xpxtko36GgP/assessing-... https://www.lesswrong.com/posts/kK5rabDsKWMkup7gw/kurzweil-s...

For 2019: https://www.lesswrong.com/posts/NcGBmDEe5qXB7dFBF/assessing-...

My own takeaway: Kurzweil's self-assessments are misleading and borderline dishonest; Kurzweil is overoptimistic in general, and you could improve by adjusting by about a decade; there seems to be a split in accuracy to my eyes about hardware vs software - when Kurzweil talks about software such as natural language translation, he is often right or delayed by a relatively short time, but when he talks about hardware (and biology especially) he is strikingly wrong and often the prediction looks like it will never come to pass; and a lot of the software wins were driven by the deep learning revolution: even in 2015, you would have reasonably dismissed some of the 2019 predictions working out, but by the end of 2019...


I've always felt that Kurzweil doesn't properly appreciate the limits of his expertise.


The most interesting thing about looking back at Kurzweil's writing is the tech optimism we were all smoking back then.


Gwern has some interesting personal reflection [0] on tech optimism vs. tech pessimism.

[0] https://www.gwern.net/Mistakes#near-singularity


I agree.

It is fascinating, though, to see which predictions still seem as likely (though less optimistic) when we go from a "don't be evil" perspective to the 2020 one.


Kurzweil gets criticized for his incorrect predictions, but it is hard to predict the future. If you take a look at HN users' predictions from 10 years ago [0] and evaluate them strictly, then Kurzweil's predictions don't look that bad in comparison after all.

[0] https://news.ycombinator.com/item?id=1025681


My criticism of Kurzweil isn't that he gets his predictions wrong, it's that he acts like an authority with his predictions and confidence that is totally unearned. If he was just throwing out some guesses and saying "I'm probably wrong" no one would really care one way or the other.


Was there an HN thread for 2030 predictions?


There was [0]! In contrast to 2010, the 2020 thread seems much more pessimistic.

[0] https://news.ycombinator.com/item?id=21941278


Every Kurzweil prediction that came true rides on advances in semiconductors and networking. Even many of the true predictions are only 'on the way', in the sense that they exist but their impact on society isn't there yet.

Only 2 of them rely on advances in machine learning (real-time translation of foreign languages, personal assistant).

Most of these are consumer tech or advances in how we interact with computers. Two predictions about education are nice.

All predictions about advances in biotech, medicine, nanotech, etc. have failed.


The problem with medical predictions is that everybody wants to live forever, so of course he tried to rush it a little bit compared to what's realistic, so that rejuvenation can happen before he dies.

At the same time, we have a small experiment that successfully made 9 people younger using growth hormone therapy, so we know there are targets that work in humans; we just need to decrease the side effects significantly (which can sadly take 20 years realistically, as experiments are extremely slow and expensive compared to running an A/B test on a web site).


It seems the primary reason we age is DNA damage. The first-order effect is cancer, and the second-order effect is that the body engages various cancer-prevention strategies like senescence.

There are some other ways in which we accumulate damage, such as scar tissue and increasing amounts of embedded foreign bodies, but they are not nearly as important as DNA damage.

For this reason, I see CRISPR as the only path that could lead to immortality, and frozen stem cells as the most likely way to stall until that happens.


According to the Hallmarks of Aging paper, DNA inside the nucleus is relatively stable, and the epigenetic material (i.e. DNA methylation) is a much better predictor of aging, as genes can't be transcribed if the DNA is methylated at those sites.

That's why it's such a big thing that reversing methylation has already happened in humans: it means the human body already knows how to regenerate itself (just like other animals); we just don't yet know how to turn on the regeneration pathways safely.


Relatively stable is not enough, as we can see from how common cancer is.


I preferred his digital piano. That was real and had a good sound quality (for its time).


The value of reading The Singularity is Near in 2005 (I read it in 2009, a little late to the game) wasn’t the predictions themselves. The way he thought about the development of technology was fundamentally different, and by reading his explanations for his prediction other people were learning how to think about the future as well. 2000 started with clunky desktops and dial up. 2009 ended with 3G iPhones. I can forgive him for being overly optimistic about future progress given the time.


A quote from the QualityLand 2.0 audiobook by Marc-Uwe Kling:

> The Singularity is always one Kurzweil in the future.

(That is a pun on Kurzweil's name, which consists of two German words that roughly translate to "a short amount of time".)

Another quote that this may be based on: "The semantic web is the future of the web, and always will be." (Peter Norvig) Source: https://news.ycombinator.com/item?id=8510953


> Wikipedia (whose reputation in academia for being inaccurate I don't accept as fair or accurate)

Neither did I, until I started studying aculeates in detail. Only a few polistid species are even described in Wikipedia, and for the most part their pages share identical content that appears to have been copy-pasted and which, while a reasonable general description of polistid social behavior and colony cycle overall, totally fails to describe heterospecific variations which can be fairly easily found in papers which treat individual species in detail.

For a general lay overview such as an encyclopedia is meant to provide, sure, it's accurate enough. But - given that even an interested amateur like me can not only soon exhaust Wikipedia's treatment of the subject, but almost as quickly also become aware of that treatment's shortcomings - I no longer have any difficulty understanding why it's so negatively regarded as an academic reference.

On reflection, I don't understand why this should be controversial. When is any encyclopedia expected to be able to serve in this role? The genre is, as I mentioned, intended to provide a lay overview of many subjects; its treatments are thus broad and shallow by deliberate design.


Looks like the page got a hug of death. Wayback machine link:

https://web.archive.org/web/20201016100027/https://www.antro...


Vinge presents some interesting ways to measure progress toward the singularity in this talk:

https://www.youtube.com/watch?v=_luhhBkmVQs&feature=emb_logo


Future generations will remember Covid for having paused the Singularity indefinitely.


Quite the opposite. They'll look back at the turbocharging of remote work, the skyrocketing big tech stocks, the embrace of mobile-everything, and so on, as accelerating the progress. You certainly see no slowdown in the onslaught of DL papers at the online conferences this year.


I don't see those things as part of a progression toward the singularity.


They are. They all contribute to available capital, data, legibility of employment arrangements (when you go fully remote, you've made considerable progress towards enabling automation of job roles by intermediating it, and it forces rearrangement of organizations to be digital-first), the value of AI, and so on. A world in which everyone works via a computer, and AI-heavy big tech organizations are the top predator is not a world which has delayed a Singularity!


Alternate hypothesis. I think if you look back in a decade on a graph of say compute operations per second per dollar you won't see any visible effect from covid.




