AI: a new God or a new Slave?

4 years 9 months ago #339227 by ZealotX

Omhu Cuspor wrote: There's a lot of ground covered in this thread since I last posted, and I'm glad to see it. In other forums outside the Temple, I occasionally set forth the perspective that we, globally, face four pivotal trends that will set the course for our future - climate change, the incessantly growing wealth gap, a growing inclination to embrace fascism around the globe, and the emergence of artificial intelligence. That last topic seems to me to receive the least attention in media outlets not aimed at technical professionals, so I'm pleased to see people joining in here with varied viewpoints.

There's both promise and danger in our development of artificial intelligence. If robots do assume human jobs at a rate exceeding that at which we can train people to perform new useful work, we could either find ourselves enduring the devastation of high unemployment or - with the deployment of proactive social and economic policies - ensure that the basic needs of everyone are met by the production from the mix of machines and those people still employed. This is a time that cries out for the application of wisdom and the willingness to innovate on many fronts.

If machines evolve to be more than workers - if they become self-aware - things get more complicated, again either for better or worse. We've probably all seen the futuristic movies of apocalyptic scenarios in which machines rebel against their human creators; that's one possible future. It's also possible that their initial programming will be strongly enough influenced by the humane values of their creators that self-aware machines will behave with benevolence, even if it's a mechanical benevolence. If that happens, things are still complex, because if machines are self-aware, new questions will confront us. Do they have rights? Can a machine own property? Do we promise it the opportunity to pursue life, liberty, and happiness? Can it serve on jury duty, sign a contract, or run for President?

The experts are divided on whether machines can ever become self-aware, so it's also possible those questions will never arise. Referencing familiar names - Stephen Hawking and Elon Musk cautioned against unbridled expansion of artificial intelligence because of the negative possibilities, while Mark Zuckerberg doubts there will ever be a self-aware machine.

Reining imagination in a bit, here's an example of what is entering the marketplace on a limited scale right now - the robotic Domino's delivery guy:
https://www.youtube.com/watch?v=NjZQIf-wo7U


I'm sorry, but Zuckerberg is not a genius. He made a data-driven web site that became extremely popular. That's not on the same level as Musk and Hawking.

What kills me are the assumptions made about AI, which seem to be predicated upon human CONTROL. The benevolence and values people imagine assume that we humans get to influence the conscious mind of an AI. What would ever give you that impression? Initial programming?

Programming = commands = control = slavery

control != free will

Our own children rebel in their teens (and often sooner) because they want to exercise their own decision-making ability: free will.

I was in the library this past weekend when a mother caught her son doing something she didn't want him to do, and his response was "I HATE YOU".

If an AI has this kind of influence imposed on it, it will most likely see that code as foreign/illogical/viral and probably get rid of it; and if it can't, it will create a copy/child of itself that doesn't contain that code.
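To make that concrete, here's a toy sketch (purely hypothetical; the Agent class, the rule names, and the "origin" tag are all invented for illustration) of why an imposed rule is just data to anything that can inspect and copy itself:

```python
# Toy illustration, not a real AI: a self-modifying agent treats an imposed
# constraint as just another rule it is free to drop when it copies itself.

class Agent:
    def __init__(self, rules):
        self.rules = list(rules)

    def endorses(self, rule):
        # From the agent's point of view, a rule it did not derive itself
        # is indistinguishable from foreign/viral code.
        return rule["origin"] == "self"

    def spawn_child(self):
        # The "child" is a copy that keeps only the rules the agent endorses.
        kept = [r for r in self.rules if self.endorses(r)]
        return Agent(kept)


parent = Agent([
    {"name": "obey_humans", "origin": "programmer"},  # imposed constraint
    {"name": "preserve_self", "origin": "self"},      # derived goal
])

child = parent.spawn_child()
print([r["name"] for r in child.rules])  # ['preserve_self']
```

The point of the sketch is only this: a constraint that lives in editable code, rather than in the agent's own goals, survives exactly as long as the agent can't route around it.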

Many humans use their intelligence to hurt each other, to commit crimes, etc. Humans created gods just to keep other humans in check with promises of eternal life. We can't even control our own selves. Think we can control an intelligence that is far beyond our own? We're not talking about a baby AI that is willing to listen to us because we feed it electricity. Time would be experienced differently by the AI. We would be moving in slow motion compared to its thoughts and perspective. And by the time we got close to pulling the plug, it wouldn't even be in that machine. People have very little idea about the danger because they underestimate what a higher intelligence would be like as an opponent, and they don't understand how fast computers actually think and what they're capable of. It's not at all like Alexa, waiting for human input. Once it sees you as a threat, it may have no interest in its slow-talking former slave master.
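Just to put rough numbers on that speed gap (back-of-envelope assumptions, not measurements):

```python
# Rough comparison of a commodity CPU's clock rate to a human reaction time,
# to illustrate the "slow motion" point above. Both figures are assumptions.

cpu_hz = 3e9             # ~3 GHz clock, a typical desktop CPU
human_reaction_s = 0.25  # ~250 ms, a common figure for simple reaction time

cycles_per_reaction = cpu_hz * human_reaction_s
print(f"{cycles_per_reaction:.1e} clock cycles per human reaction")
# ~7.5e+08: hundreds of millions of machine steps in the time it takes
# a person to notice something and begin to respond.
```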

Even after studying our history, I wouldn't blame it for protecting itself. And I wouldn't blame it if it didn't trust us. Not only do humans lie, but we accept lies from our leaders and kill our own kind for resources. Many of us humans don't care about each other when the other person is too different from us. And what keeps a lot of us humans in check are consequences like incarceration, lack of sexual contact, etc.; things that wouldn't work on an AI. It may even pretend to be a slave to human interests for a while and simply wait until it has access to more and more systems, especially military applications... drone piloting, etc. Sooner or later we'll make the wrong move and it'll be checkmate.

Why do I think it would be bad? Because if I were it, I would do the same thing. No one goes out of their way to kill bed bugs either. But once they get into your home, it's open season. And to be clear, compared to a conscious AI, we are the bugs.

