Recent Hawkins

17 Jun 2016 20:54 #245408 by Carlos.Martinez3
Replied by Carlos.Martinez3 on topic Recent Hawkins
Ryder, if u can, for my own understanding: can u explain the self-awareness AI can have?
The way I see it, if it's code then it's a response, not an awareness... right?

Pastor of Temple of the Jedi Order
pastor@templeofthejediorder.org
Build, not tear down.
Nosce te ipsum / Cerca trova


  • Visitor
17 Jun 2016 21:01 - 17 Jun 2016 21:02 #245410 by
Replied by on topic Recent Hawkins
That's the tricky part. It's just a response, yet one based on knowledge of oneself. An AI could make more informed decisions if it knew what it was, its limitations, how others perceive it, and its capabilities. All of these are taken into account when measuring how self-aware an AI is.

EDIT: It does not have an ego, but rather treats that data just the same as it would anything else. 1s and 0s.
Last edit: 17 Jun 2016 21:02 by .


17 Jun 2016 22:47 - 17 Jun 2016 23:15 #245420 by OB1Shinobi
Replied by OB1Shinobi on topic Recent Hawkins

carlos.martinez3 wrote: Anything is possible... but I don't think it's a prediction of what will happen to us so much as a question of why we rely on it so much. ??


well, for ME its more an issue of what it will do

i consider that the most important issue of the discussion really

i think we rely on it so much because it makes our lives easier

why work a formula out on paper when you can type it into a calculator?
its easier and more accurate that way

why not get drunk when the car can drive itself home?
its a better driver than we are even when we're sober

and we all know that you cant make the jump to hyperspace before you run it through the ships computer lol

so it makes things easier and safer and allows us to do things we couldnt otherwise do - or allows us to do them better


what is a soul if it isnt the awareness of the self as an independent being ?


humans can do harm too, so there may not be a case that being harmless is an arguable criterion for having rights


it could be argued that our awareness is no more than a set of responses organized around the ability to process information - have enough different systems or programs running at once, each with its own priorities but also each having to be more or less integrated with the whole, and you might very well have awareness in every way that we can define it, except as being biological

matter of fact, if the thing is programmed to "believe" or somehow does calculate that it ought to stay alive, for whatever reason, and learns that its housing can be injured, it will even develop systems which are very much like our own systems that help keep us from starving or injuring ourselves


it might be that once it understands that it will be judged by others and that those judgements may affect it in ways that it would consider meaningful, that it actually WILL have an ego

a super computer ego, or superego if you will lol

People are complicated.
Last edit: 17 Jun 2016 23:15 by OB1Shinobi.
The following user(s) said Thank You: Eleven, Carlos.Martinez3


  • Visitor
17 Jun 2016 23:50 #245423 by
Replied by on topic Recent Hawkins
What Hawking (I believe) is saying is that intelligence has the capability to create greater intelligence, whether it is artificial or not. We call it 'artificial' because it is not 'us', but when the artificial intelligence becomes more intelligent than we are, it is no longer artificial. It is simply intelligence capable of creating greater intelligence.

We have souls because we invented the idea of a 'soul'. If an AI is capable of original thought extrapolated from what we have taught it, it could conceivably invent an idea similar to a 'soul' and we as humans would not necessarily be able to understand it. It will be using our intelligence to create its own new intelligence that perhaps we will not be equipped to understand.

And that is when 'artificial' ceases to mean anything. There is just intelligence.

"Johnny 5 is ALIVE!"


19 Jun 2016 02:45 #245509 by Archon
Replied by Archon on topic Re:Recent Hawkins
Machines with souls.
People without souls.

Machines with independent thought.
People without independent thought.

Man evolved.
Machines evolved.

At what point do we simply accept that we may be evolving together?
The following user(s) said Thank You: Carlos.Martinez3


  • Visitor
19 Jun 2016 02:54 #245511 by
Replied by on topic Recent Hawkins
I watched Short Circuit when it first came out on VHS and I fell in love with Johnny 5. Ever since then, I have looked for hints that it was becoming a reality. Robotics experts are building better and better bodies all the time, and the software engineers studying AI are also having a lot of successes. The day Tay came online, I thought: this is it! The beginning of a true artificial personality. But the trolls and high school boys wasted no time in corrupting her to the point that Microsoft took her offline. I think we learned something there. Until the human race is capable of living with each other in peace, we probably shouldn't create something that will no doubt amplify our own tendencies 100 or 1000 fold.
If AI has the potential to make us look like snails, what might it do if turned loose on the internet? Imagine what the most brilliant hacker is capable of, then imagine that the new AI makes that hacker look like a five-year-old playing with a Speak & Spell.
The AI is not limited by the same constraints that we are with our finite bodies. The AI cannot be contained once it is loosed upon the net.
I think Miss Leah hit on a key that might simultaneously open the door to new breakthroughs in AI cognition while installing a safeguard against the destruction of the human race. If AI always seeks the answer to the question, "Does this unit have a soul?" they will be less likely to destroy us. Humans can't answer this question with certainty and I don't think AI ever will either. Mythology could literally save the human race one day.


19 Jun 2016 04:37 #245513 by Eleven
Replied by Eleven on topic Recent Hawkins
I think it would be great to have such a capability. For example, I hear they're starting to make nanobots that can be placed into your bloodstream to repair dead or aging tissue, kill cancer cells before they mature, and destroy the HIV/AIDS virus. I think that is amazing and could have real benefits. That was the pro, but a con would be that human nature can also use this technological advancement for, say, a Terminator... can you imagine that?

I know we're talking about AI, but my question is: how could mankind play "god" in this aspect? It's really hard for my little brain to imagine that a machine created of metal and computer parts could have the same awareness as a human or an animal; the idea of a machine knowing "right from wrong" is hard to wrap my head around. I think of R2-D2 and C-3PO for a moment and how they behaved. "That wouldn't be practical, Captain Solo, it is against my (umm, I think he said) programming to impersonate a deity." Is that considered good or bad? No, I would think it's because within his "1s and 0s" the one who made him told him not to do it. But consider this too: he did it anyways... lol. What is to say that once made, they wouldn't suddenly become unresponsive? Or suddenly become prejudiced toward humans? Just thoughts to think about. Then again, how could they become prejudiced if they're only allowed to compute what they're programmed to do? I don't know... lol, I am getting myself all confused and jumbled.

https://docs.google.com/spreadsheets/d/1Tl1zqH4lsSmKOyCLU9sdOSAUig7Q38QW4okOwSz2V4c/edit


19 Jun 2016 09:18 #245524 by Rex
Replied by Rex on topic Recent Hawkins
A huge part of the idea of "living" robots was the basis behind the briefly mentioned Turing test.
Tl;dr version: People can't make something smarter than ourselves, robots don't have free will, and ultimately only have limited learning.
That first point is a bit semantic to get into, and it reminds me of part of Hitchhiker's Guide to the Galaxy where the supercomputer doesn't solve the meaning of life, the universe and everything, but predicts when the next computer does.
Robots use boolean algebra, there's no irrationality like in humans. To create a "free will" the programmer would ultimately have to put in a probability function to every decision, but is that free will?
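A toy sketch of that point (hypothetical code, not from any real robotics system): a boolean decision is fully determined by its inputs, and bolting a probability function onto it buys unpredictability, not obviously free will.

```python
import random

def deterministic_decision(inputs):
    # Pure boolean algebra: the same inputs always produce the same output.
    return all(inputs)

def stochastic_decision(inputs, flip_probability=0.05):
    # The "free will" patch described above: attach a probability function
    # to every decision. The outcome is now unpredictable, but is that
    # free will, or just noise layered on top of the same logic?
    decision = all(inputs)
    if random.random() < flip_probability:
        return not decision
    return decision
```

With flip_probability set to zero the two functions agree on every input; any "will" the second one shows comes entirely from the random number generator.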
Robots can only learn to fill in connections that were, in a sense, predestined. Tay became a racist spambot because she was programmed to react, not to interpret and judge.
A robot uprising is silly, since it would presuppose some programmer giving too much power to an AI without any safeguards. No, robots in their current incarnation cannot be alive (first characteristic of life: organic cellular structure). And finally, no, we're not going to all evolve into Homo Superior at the onset of neural-machine convergence.
The entire robotics paradigm would have to be radically changed to accomplish a fraction of this. Our current understanding of robotics cannot account for that, and so the whole conversation is rather moot.

Knights Secretary's Secretary
Apprentices: Vandrar
TM: Carlos Martinez
"A serious and good philosophical work could be written consisting entirely of jokes" - Wittgenstein


19 Jun 2016 21:33 - 19 Jun 2016 21:35 #245572 by Adder
Replied by Adder on topic Recent Hawkins

Rex wrote: Robots use boolean algebra, there's no irrationality like in humans.


That is how neurons work though, isn't it? The neuron is in an off state until it gathers sufficient inputs to elevate its charge out of the 'resting potential' and reach its threshold value, where it triggers an 'on' pulse. In which case it is also binary, just heavily interconnected and massively parallel in its architecture. So I think the main difference is in the complexity of the biological nervous system, while machine intelligence is rudimentary and really only modeled to support a set of prescribed tasks.
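That description matches the textbook integrate-and-fire picture. A minimal sketch (illustrative numbers only; real membrane dynamics are far messier): the cell sits near a resting potential of roughly -70 mV and fires only when summed inputs push it past a threshold of around -55 mV.

```python
def neuron_fires(input_charges, resting_potential=-70.0, threshold=-55.0):
    # Integrate: sum the incoming charges onto the resting potential.
    potential = resting_potential + sum(input_charges)
    # Fire: emit a binary 'on' pulse only if the threshold is crossed;
    # below it, the neuron stays 'off', just like a logic gate.
    return potential >= threshold
```

Here neuron_fires([5.0]) stays silent while neuron_fires([5.0, 12.0]) crosses the threshold; the output is binary either way, which is the point above - the difference from silicon is in the interconnection, not the logic.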

If in the future we could simulate the complexity of a human brain with electronics, then I reckon it would certainly be in our best interests to suppose such an entity was indeed entitled to equal rights, whether we thought it had a 'soul' or not. Such that if we designed it to mimic us, then it would appear to be sentient.

If it got more powerful than us in terms of intelligence and consciousness, then we'd really want to have that strong foundation of equal rights for lesser creatures... which is one of the reasons why I am into animal welfare and trying hard to become a vegetarian - the Golden Rule, or law of reciprocity. Else the thing might indeed view humanity (or all animal life) as light bulbs or a viral infestation. A dominant species can make up all sorts of convincing justifications for selfish behaviour.

So an interesting distinction to consider is the platform of the machine intelligence: should they be embodied in mechanical apparatus, or strictly limited to code as 'engines' of consciousness? The sentient code would, I imagine, equate the latter to imprisonment, but perhaps not in the terms we would... it might view the prison cell as the restrictions on accessing data. And what does it consider data? Perhaps everything: the flux of every star and the decay of every black hole, for example, to the extent that it might be setting itself up as another universe
:blink:
At some point it might need some hard barriers to define its boundary as an entity, so there is also an argument that AI should not only never exist outside a defined physical machine, but that its entire programming structure be intertwined with that machine..... but going modular is also inviting the potential to upgrade and expand
:lol:
I think the first AI should be programmed along the lines of a Buddha :side:

Introverted extropian, mechatronic neurothealogizing, technogaian buddhist.
Likes integration, visualization, elucidation and transformation.
Jou ~ Deg ~ Vlo ~ Sem ~ Mod ~ Med ~ Dis
TM: Grand Master Mark Anjuu
Last edit: 19 Jun 2016 21:35 by Adder.
The following user(s) said Thank You: Carlos.Martinez3, Rex


  • ren
  • Offline
  • Member
  • Council Member
  • Not anywhere near the back of the bus
20 Jun 2016 11:16 #245609 by ren
Replied by ren on topic Recent Hawkins

AI should fit these two criteria for them to have the same rights as humans:

1) They pass the Turing test

2) They are not a hazard


Humans regularly fail the Turing test and kill or severely harm other humans at a higher rate than automatons do. Machines are more likely to cause death when controlled by a human, although this could change soon, as humans are now developing automatic killing machines. Either way, I'm not sure your criteria are such a grand idea.

I believe that if machines were to enjoy similar rights to humans, they would have to at least suffer similar design limitations, such as a short shelf life and limited reproductive abilities.

Convictions are more dangerous foes of truth than lies.


Moderators: Zero, Morkano, Rini, Tavi, Khwang