
I mean, it has a bit of weight when a person who won a Nobel prize in meteorology says it. At least it's worth giving the sky a glance once in a while to make sure you're safe.


If only climate scientists were given .01% of the credence these buffoons get.

edit: Henry Kissinger has a Nobel Peace Prize. If the Nobel committee ever corrects that error and makes the world safe for political satire again, I might start giving a shit who has a medal.


I think it's hard to get people to give credence to experts whose claims can be demonstrated to be outlandish. E.g. James Anderson, famously known for helping to discover and mitigate the Antarctic ozone hole in the late 20th century, said in 2018 that the chance of there being any permanent ice left in the Arctic by 2022 was "essentially zero"[0].

Yet a NASA site reports that in September 2022 (when the most recent measurement was taken), the Arctic sea ice minimum extent was ~4.67 million square kilometers. [1]

To be very explicit: I'm not saying that climate change doesn't exist. I'm not saying that Arctic sea ice is not diminishing (the NASA site says it's diminishing at ~12% per decade). I'm not saying that the Nobel prize is a good indicator of expertise.

I'm saying specifically that I believe it's more difficult to convince people to trust a source making claims of negative consequences when those consequences are less bad than the source says.

An analogy I might use is drugs (specifically in the US). I've heard a few people, who went through an anti-drug education program forced on them in their adolescence by parents/teachers, mention that marijuana was portrayed as just as bad as other, harder drugs. Then, when they went on in high school and college to smoke weed and discovered that they did not ruin their lives by getting stoned a few times a week, or even every day, they subsequently gave less credence to what the anti-drug advocates were saying.

[0]: https://www.forbes.com/sites/jeffmcmahon/2018/01/15/carbon-p...

[1]: https://climate.nasa.gov/vital-signs/arctic-sea-ice/


So basically, the original article is about an AI huckster amping the FUD in order to push through some sort of corporate control of the technology, using fear of the bullshit they're spinning as the justification.

I brought in the analogy of Chicken Little, inflating the scope of a threat to one of apocalyptic proportions, which is exactly what is taking place here.

The first person to respond to me brought in the climate analogy, presumably as a means of getting me to think that maybe I'm the fool here for ignoring the real scientist who, hey, has a Nobel Prize! Or at least, the theoretical meteorologist in his metaphor does, and therefore maybe I, with no Nobel Prize, should just be respectful of the expert here.

I responded by pointing out that the Nobel committee are morons who gave a Peace Prize to one of the worst war criminals of the 20th century, and pressed the fact that we have thirty-plus years of scientific consensus about climate change, along with a lot of corporate-funded think-tank noise that is running ideological interference, successfully so far. But you can only fool people for so long; it caught up to the tobacco industry and it will catch up to them, too.

The idiots amping up the FUD to seize control and the assholes pumping money into think tanks that generate endless climate denialist noise are the same people.


> If only climate scientists were given .01% of the credence these buffoons get.

Climate scientists are extrapolating complex models of a system that's literally planet-sized, with a lot of chaotic elements, and where it takes decades before you can distinguish real changes from noise. And, while their extrapolations are plausible, many of them are highly sensitive to parameters we only have approximate measurements for. Finally, the threat itself is quite abstract, multifaceted, and set to play out over decades - as are any mitigation methods proposed.

The "buffoons", on the other hand, are extrapolating from well-established mathematical and CS theorems, using clear logic and common sense, both of which point to the same conclusions. Moreover, the last few years - and especially the last few months - provide ample and direct evidence that their overall extrapolations are on point. The threat itself is rather easy to imagine, even if through anthropomorphism, and set to play out near-instantly. There are no known workable solutions.

It's not hard to see why the latter group has it easier - at least now. A year ago, it was they who got .01% of the credence the climate people got. But now, they have a proof of concept, and one that everyone can play with, for free, to see that it's real.


The buffoons are definitely talking about a scary beast that's easy to imagine, because we've been watching it in Terminator movies for forty years. But this is not that beast.

The beast here is humanity, and capitalism, by which I mean, the idea that you can collect money without working and that that is at all ethically permissible. The threat of AI is what is happening with kids' books on the Kindle platform, where a deluge of ChatGPT-generated kids' books are gaming the algorithm and filling kids' heads with the inane doggerel that this thing spits out and which people seem to believe passes for "writing".

And people keep saying how amazing the writing is. Show me some writing by an AI that a kindergartener couldn't do better. What they do is not writing, it's a simulacrum of the form of a story but there is nothing in it that constitutes art, just an assemblage of plagiarized structures and sequences. A mad-lib.

Everyone is freaking out, and the people who should be calming folks down and pushing for a rational distribution of this new tool which will be extremely useful for some things, eventually, are abdicating their responsibility in hopes of lots of money in their bank account.

When silent movies came out, there were people who freaked out and couldn't handle seeing pictures move, even though the pictures weren't actually moving. It was an illusion of movement. This is an illusion of AI; it's just a parlor trick, like a Victorian séance where your grandpa banged on the table. Scary, because they set the whole scenario up so you would only look at the stuff they wanted you to see. We still spend months assembling a single shot of a movie, and even if AI starts doing some of that work, all that work still has to happen; the pictures still don't move. A hundred years from now, what you're all freaking out about still won't be intelligent.

I do agree this is a world-changing technology, but not in the way they're telling you it is, and the only body I see which is approaching this with even a shred of rational thinking is the EU parliament. The danger is what people will do with it, the fact is it's out and it's not going back in the bottle.

We don't solve this by building a moat around a private corporation and attempting to pitchfork all the AI into the castle. Using this technology requires two things: a bit of Python, and a LOT of compute capacity. The first is the actual hard part. The second is in theory easier for a capitalist to muster, but we can get it in other ways, without handing control of our society to private equity. It's time we get straight on that.

The AI apocalypse only happens if we cling to capitalism as the organizing principle of our society. What this is definitely going to kill is capitalism, because capitalists are already using it to take huge bites of the meat on our limbs. Ever seen a baboon eat lunch? That's us right now, the baboon's lunch. As long as we tolerate this idea that people who have money should be able to do whatever they want, yes, AI will kill us (edit: because it works for free, however absurdly badly).

How many submarines, how many Martin Shkrelis, before we recognize the real threat?


He negotiated a cease-fire in the Vietnam War.

That was peace. His other activities in life were not peace prize winning.

That award was less satirical than Obama getting a Nobel Peace Prize just for winning an election.


He negotiated a cease-fire in the war he fomented. Are you hearing yourself?


Same mentality that doesn't give climate scientists credence...


Yah, being cynical about a giant corporation inflating the scope of a new parlor trick in an attempt to establish a legal moat is exactly the same as ignoring over thirty years of scientific consensus against a torrent of tobacco-industry-style denialism to keep the line going up.


A giant corporation? You know that Hinton doesn't work for Google, and Bengio, the most cited computer scientist of all time, is saying the same thing?

Have a look at https://www.safe.ai/statement-on-ai-risk and filter by just "AI Scientists"...

Too lazy? There are over 100 CS professors and scientists.

Plus, neither of the CEOs of Microsoft or Google is on there.

It's the corporate camp, companies and investors, that are gung-ho about pushing capabilities immediately because there's big $$ in their eyes. You're the one falling for the safety denialism pushed by corporate interests, a la tobacco


> A giant corporation? You know that Hinton doesn't work for Google

U of T is a giant corporation in its own right.

> Too lazy? There are over 100 CS professors and scientists

And also Grimes. But what do these particular experts really know about humans and what vulnerabilities they have? This isn't a computer science problem. Being an expert in something doesn't make you an expert in everything.


So what's the plan for putting it back in the bottle? llama is already out there, the chatbots are already out there.

I think the solution is that government should stand up a bunch of compute farms and give all citizens equal access to the pool, and the FOSS community should develop all the tools for it, out in the open where everyone can see.


There isn't a feasible plan; we're at the "sounding the alarm" part. Unfortunately, we're still there because most people don't even acknowledge the possible danger. We can't get to a feasible plan until people actually agree there's a danger. Climate change is one step past that: people agree there is a danger, but there's still no feasible plan.

However, your solution is first-day naivety about the problems machine intelligence poses to us. It's akin to saying everybody should have powerful mini-nukes so that they can defend themselves.


I would counter that

a) what is currently being touted as AI is neither artificial, nor is it intelligent. This is a bunch of hucksters saying "we made a scary demon with powers and now we're scared it's going to kill us all!" but in fact it's just a plagiarism machine, a stochastic parrot. Yes, it will get more useful as time goes on, but the main blockade is always going to be access to compute capacity, and the only viable solution to that is a socialist approach to all data processing infrastructure.

b) even if we stipulate that there is a scary daemon that could consume us all (and meanwhile teach me linear algebra and C++), and we transform that into pocket nukes as a more terrifying metaphor cause why not, your solution seems to be to pretend that your mini-nukes cannot be assembled from parts at hand by anyone who knows a bit of Python.

They can.


Andrew Ng doesn't agree but that is a boring story.

It seems to me the big names in AI research are having a moment that appeals to their vanity. It's easy to get your name in the headlines by out-dooming the next guy.

There is also no downside to these predictions about AI eating us, since when they are totally wrong you can just counter that it hasn't eaten us, yet.



