Hacker News

> Studying human systems is one of the best ways of studying complex systems and systems engineering, which are already crucial for complex engineering projects, like developing a complex AI.

What does it mean to study a generic 'complex system' and 'systems engineering', and what does this have to do with estimating the potential risks and dangers?

> we will have to study how basic, but constantly developing, AI integrates and plays off of human social systems.

This presumes you already know all about what the AI will be, and puts the cart before the horse.

> We have to gather quantified data about how two distinct forms of intelligence interact and what, if any, conclusions can be generalized to a future where humans are no longer the species with the highest intelligence.

Consider an aborigine making this argument: 'we have observed their firearms and firewater, and know there are many unknowns about these white men in their large canoes; our best analyses and extrapolations certainly suggest they could be a serious threat to us, but we must reserve judgement until we quantify data about how our forms of intelligence will interact with theirs'.

> You have no data about how real AI's would behave in our society except for fiction, which contains no more guidance now than the Bible did for 16th century astrophysics

Really? We know nothing about AI and our best guesses are literally as good as random tribal superstitions?

> We have no consistent models that explain our own intelligence, let alone an artificial one that has yet to exist.

Someone should tell the psychologists and the field of AI that they have learned nothing at all that could possibly inform our attempts to understand these issues.


