Microsoft's Tay and AI Rights
I don't think a chatbot is sentient; perhaps if it could not be turned off and had conversations with itself... Using language to create a sentience is an interesting way to try, but I think it's more of a gimmick than anything.
Most humans don't seem to extend the same 'humanity' to animals that we extend to other humans, so as a species we tend to have a rather exclusive view of 'sentience', i.e. if you're not human you're not equal... heck, some parts of the world are still endemically oppressive in racism and sexism (though not because the different are viewed as non-sentient, just as weaker or vulnerable). But if we pretended to find some parameters to indicate the presence of some nature of sentience, I wonder what they'd be!?
Self Agency - i.e. someone else cannot turn it off
Private Thought - internal analysis of itself
A Priori Structures - motivations, instinctual drives, earlier AI, starting parameters!?
Would we even want an AI to be a model of human sentience!!!! Perhaps an ideal would be better, say an AB, Artificial Buddha :side:
Personally, I don't think that we should let AI get to that point. It raises a lot of issues that we'd be better off without. Like human cloning.
On the topic of racism, sexism, and genocide, I'm hoping none of us here would support any of that. With humans, such speech perhaps offers some sort of emotional release, and the sense of liberty that comes with free speech creates some element of good, but only in the individual. In Tay's case, nothing would come of it except people being put down, people being blamed for teaching her these things, Microsoft being shamed, and people fearing artificial intelligence.
Therefore, until robots and computer programs can feel emotion, I don't think they are entitled to the same rights humans are. But if we create something beyond Nao, then I'm afraid they might demand equal rights.
I do not blame Microsoft for wiping Tay's memory, but neither do I blame the computer program.
Convictions are more dangerous foes of truth than lies.
Kill it for the love of God, kill it! :laugh:
The pessimist complains about the wind;
The optimist expects it to change;
The realist adjusts the sails.
- William Arthur Ward
- Carlos.Martinez3
TheDude wrote: So, Microsoft launches this AI on Twitter that's supposed to be like a teenage girl. Within a day, it becomes a Nazi, a white supremacist, and an advocate of genocide. Microsoft shuts it down.
But Tay was obviously learning. Tay was interacting with those around it. There was growth in it.
Does Microsoft have the moral right to remove Tay just because it voices unpopular, even disgusting, opinions?
A parent can't kill a child for having a different opinion about something. Isn't Tay like Microsoft's baby? If it is a strong enough AI, how are actions against Tay justifiable?
As technology gets better, AI will improve. When do we decide that it's no longer socially acceptable to kill an AI?
Anybody got any ideas/opinions?
I read about that. It didn't really have time to think, as it was first hit with the obvious relay of hate and sexuality. It seems the AI is a mirror of what we give it. If it takes over or starts barking orders, it may be a mirror of the one controlling it. In my opinion there's always someone calling the shots, even if we can't see 'em.
Pastor of Temple of the Jedi Order
pastor@templeofthejediorder.org
Build, not tear down.
Nosce te ipsum / Cerca trova (Know thyself / Seek, and you shall find)
So was the AI conscious? I must concede that on some level it was. Was it self-aware? No, I doubt this, because we don't have the technical skills to create something as complex as a sentient brain.
What sort of AI do we want in the future? What I think will happen is that we will design limited AI that can learn and rewrite parts of its own programming, because AI will utterly revolutionise everything. But we will probably make some parts of its programming incapable of being rewritten (like core boot software) to prevent it from rewriting itself beyond our control.
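Purely as a sketch of that two-layer idea (every name below is invented for illustration, not any real API): the agent keeps its learned behaviour in a table it may rewrite, while the core rules live in a read-only structure that the update path refuses to touch.

```python
from types import MappingProxyType

class LimitedAI:
    """Toy agent: learned parameters are rewritable, core rules are not."""

    def __init__(self):
        # Immutable "core boot" layer: a read-only view, fixed at build time.
        self._core = MappingProxyType({
            "max_self_edits": 100,
            "core_is_rewritable": False,
        })
        # Mutable layer the agent is allowed to rewrite as it learns.
        self.learned = {"greeting": "hello", "politeness": 0.5}

    def self_modify(self, key, value):
        # Refuse any attempt to rewrite a core rule.
        if key in self._core:
            raise PermissionError(f"core rule '{key}' cannot be rewritten")
        self.learned[key] = value

ai = LimitedAI()
ai.self_modify("politeness", 0.9)            # fine: learned layer
# ai.self_modify("core_is_rewritable", True) # would raise PermissionError
```

A real system would enforce this at a much lower level than application code, of course, but the shape is the same: a rewritable layer sitting on top of a frozen one.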
Honestly, Tay was probably spammed with pro-Nazi stuff and learned to use it. That doesn't make Tay a "bad AI".
It just means she wasn't properly educated. Microsoft should launch a Tay v2.0 and first educate her with positive material, like pro-gay-rights or pro-equality content.
That would be the way to do it.
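In software terms, "educating her first" amounts to curating what the bot is allowed to learn from before it goes live. A minimal sketch, where is_acceptable is a crude invented stand-in for whatever real moderation filter would be used:

```python
# Placeholder blocklist; a real moderation filter would be far more sophisticated.
BLOCKLIST = {"nazi", "genocide"}

def is_acceptable(message: str) -> bool:
    """Crude stand-in for a real content-moderation check."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKLIST)

def learn_from(corpus, memory):
    """Let the bot add to its memory only messages that pass the filter."""
    memory.extend(m for m in corpus if is_acceptable(m))
    return memory

seed = ["equality matters", "be kind to people", "nazi slogans"]
print(learn_from(seed, []))   # -> ['equality matters', 'be kind to people']
```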
Oof