
Only for those mundane senses of alignment where we say "This system is reliable in tasks that look like X and unreliable in tasks that look like Y, so let's craft hard boundaries to avoid naive use for Y"

But it's skeptical of the other sense of alignment, where a potential Master Strategist needs to be trained or crippled before it outsmarts us. It sees that perspective as comparable to logicians debating whether we might live in the domain of a benevolent or evil omnipotence: "if an ant is more powerful than a rock, and I'm more powerful than an ant, then perhaps there is something so powerful that it encompasses all opportunities to influence the universe, including the power to hide itself from me." -- which comes from taking a concrete measure, assuming it's an independent variable, and then inductively extending it to an infinite or otherwise unevidenced scale. This technique is undisprovable, so it's easy for "rational" people to mine work from it for a very long time, but history and analysis give room for skeptics to be like "WTF you going on about; let's have some tea"



" but history and analysis give room for skeptics to be like

Only when the skeptic is correct. The problem with skeptics is that when being wrong isn't terminal, they can't hear you over the sound of pushing the goalposts far enough to make their originally incorrect statements sound reasonable.



