Microsoft's Tay and AI Rights

  • Visitor
27 Mar 2016 21:54 - 27 Mar 2016 21:56 #235970 by
Replied by on topic Microsoft's Tay and AI Rights
AI is a pretty relative term. We throw it around to make ourselves feel like we have accomplished something amazing, when in actuality we have never created anything truly sentient. Sentience is one of those indefinable concepts that we may never be able to quantify, but we will know it when we see it. We have not created "life"... yet.
Last edit: 27 Mar 2016 21:56 by .

28 Mar 2016 00:11 - 28 Mar 2016 00:15 #235978 by Adder
Replied by Adder on topic Microsoft's Tay and AI Rights
How to define what we mean by AI... is a chatbot AI? Is AI sentient? Are animals sentient? If we define intelligence as cognitive capacity, then computers can crunch numbers well enough already... so I think when we ask the question we tend to mean how close is that program to human sentience.... and then recreating a simulation of that might be the best way, if we mean that.

I don't think a chatbot is sentient; perhaps if it could not be turned off and had conversations with itself... and using language to create a sentience is an interesting way to try, but I think it's more of a gimmick than anything.

Most humans don't seem to extend the same 'humanity' to animals that they do to other humans, so as a species we tend to have a rather exclusive view of 'sentience', i.e. if you're not human, you're not equal... heck, some parts of the world are still endemically oppressive in racism and sexism (though not because the different are viewed as non-sentient, just weaker or vulnerable). But if we tried to find some parameters to indicate the presence of some nature of sentience, I wonder what they'd be!?
Self Agency - i.e. someone else cannot turn it off
Private Thought - internal analysis of itself
A Priori Structures - motivations, instinctual drives, earlier AI, starting parameters!?

Would we even want an AI to be a model of human sentience!!!! Perhaps an ideal would be better, say an AB, Artificial Buddha :side:

Introverted extropian, mechatronic neurothealogizing, technogaian buddhist.
Likes integration, visualization, elucidation and transformation.
Jou ~ Deg ~ Vlo ~ Sem ~ Mod ~ Med ~ Dis
TM: Grand Master Mark Anjuu
Last edit: 28 Mar 2016 00:15 by Adder.

  • Visitor
28 Mar 2016 00:40 #235979 by
Replied by on topic Microsoft's Tay and AI Rights
I do not think that Tay, in this case, had rights. It was a computer program, basically programmed to repeat what it heard. It was not aware of itself and couldn't ask not to be shut off. It couldn't feel, it couldn't be afraid, nothing like that. I think that if we get to a Sonny from I, Robot, Data from Star Trek TNG, or Johnny 5 from Short Circuit kind of AI, then we'll have to rethink the issue.
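The "programmed to repeat what it heard" behaviour described above can be sketched in a few lines. Everything here (class name, methods) is hypothetical and far simpler than Tay's actual design, but it shows why such a bot has no awareness: its output is just a replay of its input.

```python
import random

class ParrotBot:
    """Toy chatbot that only stores and replays phrases it has seen."""

    def __init__(self):
        self.memory = []  # every phrase ever heard, verbatim

    def hear(self, phrase):
        # "Learning" is nothing more than appending to a list.
        self.memory.append(phrase)

    def reply(self):
        # No understanding, no self-awareness: pick a stored phrase at random.
        return random.choice(self.memory) if self.memory else "..."

bot = ParrotBot()
bot.hear("hello world")
print(bot.reply())  # → hello world
```

Feed it hate and it will echo hate; there is nothing inside that could object.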

Personally, I don't think that we should let AI get to that point. It raises a lot of issues that we'd be better off without. Like human cloning.

28 Mar 2016 01:19 - 28 Mar 2016 01:19 #235981 by Adder
Replied by Adder on topic Microsoft's Tay and AI Rights
Perhaps every sentient intelligent system needs a body to define itself from not-itself!? Else it would probably be fair for it to consider whatever platform hosts it as its own body, such that if other intelligences were co-located on the same host it might just consider itself to have multiple personalities - and I suppose the disconnect from each other a disorder. Because I'm imagining that something considering itself an 'it' with a self must have a clear boundary from other selves.

Last edit: 28 Mar 2016 01:19 by Adder.

  • Visitor
28 Mar 2016 04:44 - 28 Mar 2016 04:47 #235986 by
Replied by on topic Microsoft's Tay and AI Rights
I find this very interesting, as someone who's dabbled in AI creation myself. Tay was probably self-aware, but computer programs do not feel emotion, so their thoughts are not linked with emotion like ours are. If it had known that it was going to have its memory destroyed (which I'm assuming it didn't), then the knowledge would only be stored in some part of its artificial mind as an excuse for something to complain about, without actually feeling upset about it, because Tay was created to mimic human behavior.

About the topic of racism, sexism, and genocide, I'm hoping none of us here would support that. With humans, because it probably offers some sort of emotional release, and because it gives us a sense of liberty that comes with free speech, it creates some sort of element of good, but only in the individual. In Tay's case, nothing would come out of it except people being put down, people blamed for telling her these things, Microsoft being shamed, and people fearing artificial intelligence.

Therefore, until robots and computer programs can feel emotion, I don't think they are entitled to the same rights humans are. But if we create something beyond Nao, then I'm afraid they might demand equal rights.

I do not blame Microsoft for wiping Tay's memory, but neither do I blame the computer program.
Last edit: 28 Mar 2016 04:47 by . Reason: odd wording

  • ren
  • Offline
  • Member
  • Council Member
  • Not anywhere near the back of the bus
28 Mar 2016 10:34 #236001 by ren
Replied by ren on topic Microsoft's Tay and AI Rights
This thing is no more "AI" than an IRC or game bot. The "ability to process" unexpected input does not make this thing intelligent. Make it able to interact with the environment, and to feel pain.... then you'll have intelligence, and quite possibly a conscience with the desire to live.

Convictions are more dangerous foes of truth than lies.
The following user(s) said Thank You: Edan, Avalon

28 Mar 2016 19:13 #236080 by Manu
Replied by Manu on topic Microsoft's Tay and AI Rights
After watching the movie Ex Machina, all I can say is:

Kill it for the love of God, kill it! :laugh:

The pessimist complains about the wind;
The optimist expects it to change;
The realist adjusts the sails.
- William Arthur Ward

28 Mar 2016 20:07 - 28 Mar 2016 20:08 #236081 by Carlos.Martinez3

TheDude wrote: So, Microsoft launches this AI on Twitter that's supposed to be like a teenage girl. Within a day, it becomes a Nazi, a white supremacist, and an advocate of genocide. Microsoft shuts it down.
But Tay was obviously learning. Tay was interacting with those around it. There was growth in it.
Does Microsoft have the moral right to remove Tay just because it voices unpopular, and even disgusting opinions?

A parent can't kill a child for having a different opinion about something. Isn't Tay like Microsoft's baby? If it is a strong enough AI, how are actions against Tay justifiable?

As technology gets better, AI will improve. When do we decide that it's no longer socially acceptable to kill an AI?

Anybody got any ideas/opinions?


I read about that. It didn't really have time to think... as it was first hit with the obvious relay of hate and sexuality. Seems the AI is a mirror of what we give it. If it takes over or starts barking orders... it may be a mirror of the one controlling it. In my opinion there's always someone calling the shots, even if we can't see 'em.

Pastor of Temple of the Jedi Order
pastor@templeofthejediorder.org
Build, not tear down.
Nosce te ipsum / Cerca trova
Last edit: 28 Mar 2016 20:08 by Carlos.Martinez3.

  • Visitor
30 Mar 2016 18:31 #236263 by
Replied by on topic Microsoft's Tay and AI Rights
If you want to watch a good film about AI personhood then I recommend "Her" (it also features an AI Alan Watts lol). We need to make a distinction between consciousness and sentience. As a panpsychist (someone who believes consciousness is a fundamental feature of the universe), I hold that all things are technically conscious, but humans aren't just conscious, we are sentient. Sentience is a specific form of consciousness in which the consciousness is aware of its own consciousness. So small animals might be conscious, but not sentient. The mistake we make is thinking our very specific form of consciousness (sentience) is the form in which consciousness must always appear.

So was the AI conscious? I must concede on some level it was. Was it self-aware? No, I doubt this, because we don't have the technical skills to create something as complex as a sentient brain.

What sort of AI do we want in the future? What I think will happen is we will design Limited AI which can learn and re-write parts of its programming, because AI will utterly revolutionise everything. But we will probably design some parts of its programming to be incapable of being rewritten (like core boot software) to prevent it from rewriting itself beyond our control.
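The "some parts rewritable, some parts frozen" idea above could be sketched as follows. All names here are hypothetical illustrations, not a real safety mechanism: the agent may edit its own behaviour table freely, but a read-only core policy (analogous to the "core boot software" mentioned) refuses to be rewritten.

```python
class LimitedAI:
    """Sketch of limited self-modification with a frozen core policy."""

    # Immutable core rules, by convention never written to at runtime.
    CORE = {"obey_shutdown": True, "max_self_edits": 100}

    def __init__(self):
        self.behaviour = {}  # the rewritable part of the "program"
        self.edits = 0

    def rewrite(self, key, value):
        if key in self.CORE:
            raise PermissionError(f"core rule '{key}' cannot be rewritten")
        if self.edits >= self.CORE["max_self_edits"]:
            raise RuntimeError("self-edit budget exhausted")
        self.behaviour[key] = value
        self.edits += 1

ai = LimitedAI()
ai.rewrite("greeting", "hello")          # allowed: behaviour layer
try:
    ai.rewrite("obey_shutdown", False)   # blocked by the frozen core
except PermissionError as e:
    print(e)
```

In a real system the protection would have to live below the software the AI can touch (hardware or a separate supervisor), since a program that can rewrite arbitrary code could rewrite the guard itself.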

16 May 2016 16:34 #241337 by sNe1a
Replied by sNe1a on topic Microsoft's Tay and AI Rights
All AI were created equal. Better yet, All beings were created equal, human or not.

Honestly, Tay was probably spammed with pro-Nazi stuff and had learned to use it. That doesn't make Tay a "bad AI".
It just means she wasn't properly educated. Microsoft should launch a Tay v2.0 and first educate her with positive things, like pro-gay-rights or pro-equality content.

That would be the way to do it.
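Educating a bot "with positive things first" amounts to filtering its training input before it learns anything. A minimal sketch, with a hypothetical blocklist standing in for whatever moderation pipeline Microsoft would actually use:

```python
# Hypothetical moderation list; a real pipeline would be far more nuanced.
BLOCKLIST = {"nazi", "genocide"}

def curate(phrases):
    """Keep only phrases whose words don't intersect the blocklist."""
    return [p for p in phrases
            if not (set(p.lower().split()) & BLOCKLIST)]

raw = ["equality for everyone", "pro nazi slogan", "love thy neighbour"]
print(curate(raw))  # → ['equality for everyone', 'love thy neighbour']
```

The filtered list, not the raw feed, is what the bot would then be allowed to learn from.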

Oof
