Carlos.Martinez3 (Topic Author):
The way I see it, if it's code then it's a response, not an awareness... right?
EDIT: It does not have an ego, but rather treats that data just the same as it would anything else. 1s and 0s.
OB1Shinobi:
carlos.martinez3 wrote: Anything is possible... but I don't think it's a prediction of what will happen to us so much as a question of why we rely on it so much.
well, for ME it's more an issue of what it will do
i consider that the most important issue of the discussion really
i think we rely on it so much because it makes our lives easier
why work a formula out on paper when you can type it into a calculator?
it's easier and more accurate that way
why not get drunk when the car can drive itself home?
it's a better driver than we are even when we're sober
and we all know that you can't make the jump to hyperspace before you run it through the ship's computer lol
so it makes things easier and safer and allows us to do things we couldn't otherwise do - or allows us to do them better
what is a soul if it isn't the awareness of the self as an independent being?
humans can do harm too, so there may not be a case that being harmless is an arguable criterion for having rights
it could be argued that our awareness is no more than a set of responses organized around the ability to process information - have enough different systems or programs running at once, each with its own priorities but each also more or less integrated with the whole, and you might very well have awareness in every way that we can define it, except as being biological
matter of fact, if the thing is programmed to "believe", or somehow calculates, that it ought to stay alive for whatever reason, and learns that its housing can be injured, it will even develop systems very much like the ones that keep us from starving or injuring ourselves
it might be that once it understands that it will be judged by others and that those judgements may affect it in ways that it would consider meaningful, that it actually WILL have an ego
a super computer ego, or superego if you will lol
People are complicated.
We have souls because we invented the idea of a 'soul'. If an AI is capable of original thought extrapolated from what we have taught it, it could conceivably invent an idea similar to a 'soul' and we as humans would not necessarily be able to understand it. It will be using our intelligence to create its own new intelligence that perhaps we will not be equipped to understand.
And that is when 'artificial' ceases to mean anything. There is just intelligence.
"Johnny 5 is ALIVE!"
People without souls.
Machines with independent thought.
People without independent thought.
Man evolved.
Machines evolved.
At what point do we simply accept that we may be evolving together?
If AI has the potential to make us look like snails, what might it do if turned loose on the internet? Imagine what the most brilliant hacker is capable of, then imagine that the new AI makes that hacker look like a five-year-old playing with a Speak & Spell.
The AI is not limited by the same constraints that we are with our finite bodies. The AI cannot be contained once it is loosed upon the net.
I think Miss Leah hit on a key that might simultaneously open the door to new breakthroughs in AI cognition while installing a safeguard against the destruction of the human race. If AI always seeks the answer to the question, "Does this unit have a soul?", they will be less likely to destroy us. Humans can't answer this question with certainty and I don't think AI ever will either. Mythology could literally save the human race one day.
https://docs.google.com/spreadsheets/d/1Tl1zqH4lsSmKOyCLU9sdOSAUig7Q38QW4okOwSz2V4c/edit
Tl;dr version: We can't make something smarter than ourselves, robots don't have free will, and ultimately they have only limited learning.
That first point is a bit semantic to get into, and it reminds me of part of Hitchhiker's Guide to the Galaxy where the supercomputer doesn't solve the meaning of life, the universe and everything, but predicts when the next computer does.
Robots use Boolean algebra; there's no irrationality like in humans. To create "free will", the programmer would ultimately have to attach a probability function to every decision, but is that free will?
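As a toy illustration of that point, here's a minimal Python sketch (entirely my own construction - the function names, actions, and utility numbers are made up) contrasting a purely deterministic chooser with one that has a probability function bolted onto the decision:

```python
import math
import random

def deterministic_choice(options, utility):
    """Pure Boolean-algebra style: the same inputs always yield the same action."""
    return max(options, key=utility)

def noisy_choice(options, utility, temperature=1.0):
    """The 'probability function on every decision' idea: weight each option
    by its utility and sample, so identical inputs can yield different actions."""
    weights = [math.exp(utility(option) / temperature) for option in options]
    return random.choices(options, weights=weights, k=1)[0]

# Made-up utilities for a hypothetical robot's possible actions.
utilities = {"recharge": 2.0, "explore": 1.5, "idle": 0.1}
actions = list(utilities)

print(deterministic_choice(actions, utilities.get))  # always "recharge"
print(noisy_choice(actions, utilities.get))          # usually "recharge", sometimes not
```

Whether the second function has any more "free will" than the first is exactly the question at issue - it's still just a random number generator in the loop.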
Robots can only learn to fill in connections that were, in effect, predestined. Tay became a racist spambot because she was programmed to react, not to interpret and judge.
A robot uprising is silly, since it would require some programmer to give too much power to an AI without any safeguards. No, robots in their current incarnation cannot be alive (the first characteristic of life: organic cellular structure). And finally, no, we're not all going to evolve into Homo Superior at the onset of neural-machine convergence.
The entire robotics paradigm would have to be radically changed to accomplish a fraction of this. Our current understanding of robotics cannot account for that, and so the whole conversation is rather moot.
Rex wrote: Robots use Boolean algebra; there's no irrationality like in humans.
That is how neurons work though, isn't it? The neuron is in an 'off' state until it gathers sufficient inputs to raise its charge out of the resting potential and reach its threshold value, where it triggers an 'on' pulse. In which case it is also binary, just heavily interconnected and massively parallel in its architecture. So I think the main difference is the complexity of the biological nervous system, while machine intelligence is rudimentary and really only modeled to support a set of prescribed tasks.
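To make the comparison concrete, here's a minimal sketch of that threshold behaviour - a McCulloch-Pitts-style unit in Python, where the weights and threshold are arbitrary numbers of my choosing, not a model of any real neuron:

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Sum the weighted inputs; the output is binary - 'on' only when the
    accumulated 'potential' crosses the threshold, loosely analogous to a
    membrane potential leaving its resting state."""
    potential = sum(x * w for x, w in zip(inputs, weights))
    return potential >= threshold

# Three input "synapses": firing depends on the total drive, not on any
# single input - binary units, heavily interconnected in a real brain.
print(neuron_fires([1, 1, 0], [0.6, 0.6, 0.9]))  # True  (0.6 + 0.6 = 1.2 >= 1.0)
print(neuron_fires([1, 0, 1], [0.6, 0.6, 0.9]))  # True  (0.6 + 0.9 = 1.5 >= 1.0)
print(neuron_fires([0, 0, 1], [0.6, 0.6, 0.9]))  # False (0.9 < 1.0)
```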
If in the future we could simulate the complexity of a human brain with electronics, then I reckon it would certainly be in our best interests to treat such an entity as having equal rights, whether we thought it had a 'soul' or not. If we designed it to mimic us, then it would appear to be sentient.
If it became more powerful than us in terms of intelligence and consciousness, then we'd really want to have that strong foundation of equal rights for lesser creatures.... which is one of the reasons why I am into animal welfare and trying hard to become a vegetarian - the Golden Rule, or law of reciprocity. Otherwise the thing might indeed view humanity (or all animal life) as light bulbs or viral infestations. A dominant species can make up all sorts of convincing justifications for selfish behaviour.
So an interesting distinction to consider is the platform of the machine intelligence: should it be embodied in mechanical apparatus, or strictly limited to code as an 'engine' of consciousness? Sentient code would, I imagine, equate that to imprisonment, but perhaps not in the terms we would... it might view the prison cell as the restrictions on accessing data. And what does it consider data? Perhaps everything - the flux of every star and the decay of every black hole, for example - to the extent that it might be setting itself up as another universe
:blink:
At some point it might need to have some hard barriers to define its boundary as an entity - so there is also an argument that AI should not only never exist outside a defined physical machine, but that its entire programming structure be intertwined with that machine..... though going modular also invites an upgrade-and-expand potential
:lol:
I think the first AI should be programmed along the lines of a Buddha :side:
AI should meet these two criteria to have the same rights as humans:
1) They pass the Turing test
2) They are not a hazard
Humans regularly fail the Turing test and kill or severely harm other humans at a higher rate than automatons do. Machines are more likely to cause death when controlled by a human, although this could change soon, as humans are now developing autonomous killing machines. Either way, I'm not sure your criteria are such a grand idea.
I believe that if machines were to enjoy similar rights to humans they would have to at least suffer similar design limitations, such as a short shelf life and limited reproductive abilities.
Convictions are more dangerous foes of truth than lies.