> What if people are the real “Artificial” intelligence?
Yes, or rather: what people consider "intelligence" is becoming more and more artificial. I really believe that any sort of "singularity" involved with "AI" will be more about humans lowering themselves to the level of machines than machines raising themselves to the level of humans.
Obviously there are material goods that come from technology, and I do believe that part of the human condition is a symbiotic relationship with our signs/symbols, languages, systems, and machines. In a very real sense, we always have and always will live in a singularity. However, it feels like we keep forgetting that, and keep falling deeper and deeper into these weird religious crazes in which some new technology is going to fundamentally transform what it means to be human, purely for the better. I'm sure people will disagree with me here, but I don't see that at all. Without denying material benefits, I think people thousands of years ago did as well or better at wrestling with and answering the important questions.
If you want to see what mass AI-generated content looks like, look at YouTube. Yeah, it's mostly being made by actual humans. But just barely. It's one big dance being performed for an algorithm. Especially children's videos. It's not about nurturing and developing the human soul but about a machine-like desire to optimize engagement, feeding back into the machine itself. And that machine feeds back into the stock market machine. Same with SEO. Same with the internet. Adding more 100% AI-generated content will just be another layer of the same.
Of course, I'm not fundamentally against any of that stuff. Personally, I would keep all of it, but in moderation and with more introspection. Instead of saying that access to the Internet is a human right, I think the freedom to live without layer after layer of the Internet imposing itself on your life is what should be held up as an ideal. But we seem to be going in the opposite direction. This new layer is just another one selling itself as the solution to all the problems created by the previous layer.
I will never accept that AI deserves anything like human rights by virtue of intelligence, any more than I would accept that a person with an intellectual disability doesn't deserve them. But I do see the risk of AI further dehumanizing the people who already hold those rights. When half our coworkers are empty AI-generated husks that disappear like a fart in the wind at the end of the work day, how will that affect how we treat the remaining human ones?
The thing that scares me most is that the ethicists who talk about the deep questions seem more fascinated by the above situation from the machine side than the human side (perhaps humans who are on the docket to be replaced by a machine are only barely human anyway /s). And the ethicists who claim to be people-focused seem mainly fixated on making sure that white people suffer as much as United States protected classes and that no one violates copyright law.
It seems like we're ready for another industrialization, where everyone is working off an implicit assumption that the ends of this process will justify the means, and it won't be until we discover that wealthy rent-seekers have been greasing the wheels of the machine with thousands of children that we realize maybe we should take a step back.
In summary, I think AI is a similar type of risk to industrialization. But the thing I worry about is that the way it is being pitched now -- like a religion or new consciousness -- is going to create and exacerbate problems with inequality and exploitation in a way that will make us feel foolish later.
[Sorry, turned into kind of a rant. Apologies if a little off-color or off-topic, but I figure I may as well post]