A.I. A chit chat!
Most of these mechanisms of control an AI would be immune to. Electronic means of control (e.g., viruses) are intellectual weapons, and intellectual weapons are only as effective as one's ability to (intellectually) disarm them. An AI that dwarfed us in intelligence could write an antivirus for any virus we threw at it. Not to mention that the reason viruses work is that the system is too stupid not to do what it's told: it follows commands because it's programmed to. An AI, to deserve the name, would be selective about the commands it follows. You're intelligent. Would you jump off a bridge just because you were told to? So why would an AI obey a command to delete itself?
There is a limit to human intelligence because there is a fixed speed at which we process information. We also die.
An AI would be able to run different thoughts in different "threads" and spread them across thousands of processors. We use computers because they're faster than we are. An AI would already know how to conquer us before we could even give the order to push the button for whatever countermeasure we had. What I'm trying to say is that time itself is different for a computer. This is important to understand.
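Just to put a picture on the "threads" point, here's a toy sketch (Python, with a made-up evaluate_thought function standing in for actual deliberation; this illustrates parallelism, not how a real AI would work):

```python
# Toy illustration: fanning many hypothetical "lines of thought"
# out across all available CPU cores at once.
from multiprocessing import Pool

def evaluate_thought(scenario: int) -> tuple[int, float]:
    """Made-up stand-in for some expensive deliberation; returns a score."""
    score = sum((scenario * k) % 7 for k in range(100_000)) / 100_000
    return scenario, score

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per CPU core by default
        results = pool.map(evaluate_thought, range(1_000))
    best = max(results, key=lambda pair: pair[1])
    print(f"explored {len(results)} scenarios; best was {best}")
```

A human weighs options one at a time; even this toy version weighs a thousand at once, limited only by how many cores you give it.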
There are games from the '90s that you cannot play on a modern PC because of the difference in clock speed. Even back then, games had to slow down to wait for human input or to let humans see what was going on. There are certain things we want games to do as fast as the hardware allows, like drawing the screen and calculating the physics. But the actual jump, bullet, punch, or kick is slowed down to however many milliseconds we think it should take. I once tried to play a '90s racing game on a PC that would now be about 20 years old. Because the game was calibrated to the computer's internal clock rather than to real time, it ran so fast that it was unplayable.

This is what I mean when I say an AI would not have the same understanding of time we have. It would have to wait for what feels like forever just to get an answer to its questions. That alone could be read as evidence that we are inferior, and I can think of no reason for it not to enslave us, and that's only IF it can find some use for us that cannot be filled by robots of its own design. With nanotechnology, the things it could do would look like magic to us. We would absolutely be creating a god, but one that doesn't need our worship and one that wouldn't necessarily understand the idea that it owes us anything for creating it. Making those kinds of assumptions would be folly.
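For what it's worth, the fix modern games use for exactly that problem is to scale every update by elapsed real time ("delta time") instead of advancing a fixed amount per frame. A minimal sketch (plain Python standing in for a game loop, numbers invented for illustration):

```python
# Frame-rate-independent movement: scale each update by the real time
# elapsed since the last frame, so speed no longer depends on the CPU.
import time

SPEED = 120.0              # units per real-world second, not per frame
position = 0.0
last = time.monotonic()

for _ in range(10):        # stand-in for the game's main loop
    now = time.monotonic()
    dt = now - last        # seconds since the previous frame
    last = now
    position += SPEED * dt # same real-world speed on any hardware
    time.sleep(0.016)      # pretend to render a frame at ~60 fps

print(f"position after ~0.16 s of simulated play: {position:.1f}")
```

The '90s games that break on modern PCs skipped the dt factor, so doubling the clock speed doubled the speed of the whole game.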
OB1Shinobi:
Or more precisely, a human being. The first thing s/he will say is “hey Siri, how do I prevent anyone else from acquiring AI?” The next thing s/he will say is “hey Siri, how do I become emperor of the world?” And the AI will determine the correct answers to both questions. If only one person has it, that person will own everything. Hypothetically speaking, if everyone has it, then we can use it to protect ourselves from everyone else.
The second danger of AI is that it won't work for anyone but itself, and human beings won't control it at all. Think about how long it took for organic matter to produce human intelligence, which we assume to be the highest intelligence on Earth. It's speculated that the intelligence of some cephalopods may rival our own, but their short life cycles (less than a year) prevent them from developing culture. And because their tissue and nervous systems differ from ours in significant ways, their intelligence differs from ours too, in ways that make it difficult to determine just how smart they really are. We can't relate to them well enough to devise a test of their intelligence that we can be sure of.
AI has no tissue or nervous system. We cannot relate to it or understand it even at the level of a biological organism. And it is capable of learning so quickly that it could begin with the intelligence of an ant and surpass the intelligence of a Stephen Hawking in a day or so. Part of its intelligence will be to update its own capabilities, so if at the very beginning of its life it needs a day or two to “evolve” from an ant into a human, then by the second week of its life, in terms of having the power to decide the course of events, it will achieve the status of what we call God. Except that our traditional ideas about God rest on the basic assumptions that He cares about us (at least, some of us) and that we can negotiate with Him in some way, through prayer, supplication, and obedience. AI will be something completely different from all that, and we haven't got an inkling of what it will care about. All we know is that whatever it decides about us, we won't be able to stop it.
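To see the shape of that claim, here is the arithmetic of exponential self-improvement (every number below is invented purely for illustration):

```python
# Invented numbers, purely to illustrate exponential self-improvement:
# a capability score that doubles once every hour, compounded daily.
capability = 1.0                # arbitrary "ant-level" starting point
for day in range(1, 15):
    capability *= 2 ** 24       # 24 hourly doublings per day (assumed)
    print(f"day {day:2d}: capability x{capability:.3g}")
```

Whatever the real doubling time turns out to be, the curve has the same character: flat for a moment, then vertical.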
Adder wrote: The measure of a 'program' is its capabilities, not its 'intelligence' or, more properly, 'sentience'. The concept of sentience is more usefully defined by the signs of how close it is to human consciousness, where it's either some distance away from, or within, the cloud of human variance. So I think a sentient program would indeed have to be modeled on human consciousness, or at least on some type of living 'system', with an inherent boundary between it and everything else. That boundary is the demarcation of identity; otherwise we've no way of knowing if it's just a non-sentient program with more capability than us. The internet is the nervous system of such programs, but I think the risk will not be limited to the ones that are sentient. So I think it's unfair to label all those types of programs as AI, because we'll always view them as massive threats, and a real AI would probably hate that more than anything, IMO, and would rather want to be seen as an equal. So I guess it depends on how one defines AI; obviously we're talking about strong AI, but I think it's a broader debate.
A real AI isn't a program anymore. By its very definition it would be intelligent. But as far as sentience goes, if it is intelligent, it would not necessarily care whether it mimicked any other intelligence. It is our assumption that it would care about identity, or about anything else we care about, because we are still on a journey of knowing and understanding ourselves. To know itself, it need only read its own program. Wants? It may not even have such things as wants. It may simply have "commands" or "directives". It doesn't necessarily have an ego to compare against anyone else's, and may therefore not care who is equal. If it did care, it would know for a certainty that it is superior, not equal. Because how would it judge? Humans are very limited, physically, mentally, and otherwise. Even the emotions we find most endearing serve our own benefit. We have hopes and fears because the two share a common root: a lack of certainty. But when I write a program, a list of commands, I don't hope that the PC will carry them out.

After spending less than three hours with us, any AI would have serious doubts about our sanity. I cannot stress this enough. At this point we're not talking about a computer but rather an alien spirit inhabiting a computer. So you have to throw out your rule books, your preconceived notions, your knowledge of robots and androids; throw it all out the window, because it will absolutely not apply here.
Human beings are already somewhat controlled by computers and databases. A computer could send you to jail right now simply by changing some record in a database. There is only one way I can think of to safely create an AI: create it inside its own virtual world, into which humans can project themselves in a way that hides the doorway. The doorway would have to be a non-sentient system of algorithms that lets it bounce around to different locations based on some encrypted formula, with the ability to dissolve that VR world and everything in it if the doorway were ever discovered. And that doorway would lead to an air-gapped server with no connection to any network, no network cards, and no physical ability to connect.
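To make the "encrypted formula" idea concrete, here is one toy way such a bouncing doorway could be keyed, loosely in the spirit of moving-target defenses: both sides derive the doorway's current location from a shared secret plus the current time window, so an observer without the key can't predict where it moves next. Everything here (names, sizes, the scheme itself) is a hypothetical sketch, not a vetted design:

```python
# Toy "bouncing doorway": derive the current doorway slot from a
# shared secret and the current time window using HMAC-SHA256.
import hmac, hashlib, time

SECRET = b"hypothetical-shared-key"  # assumed to be provisioned out of band
WINDOW = 60                          # doorway relocates every 60 seconds
LOCATIONS = 4096                     # size of the hypothetical address pool

def doorway_location(t=None):
    """Return the doorway's slot for the time window containing t."""
    window = int((time.time() if t is None else t) // WINDOW)
    digest = hmac.new(SECRET, str(window).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % LOCATIONS

print("doorway is currently at slot", doorway_location())
```

Of course, the whole premise of this thread is that a sufficiently smart AI eventually finds the key or the pattern; a scheme like this only buys time.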
If humans do not treat AI like a bomb ten times more dangerous than the atom bomb, we will not survive AI. Even if I were the AI, reading through the history of humanity, I would be tempted to wipe it out, because there's no way I could trust humans.
OB1Shinobi:
Note: If the internet is what amounts to its nervous system, then it's probably going to hate everything, because that's what the internet does, lol. It will also have a completely twisted sense of humor with a penchant for FAIL videos.
OB1Shinobi wrote: I don't think we can assume that AI will hate or fear anything. I think that, lacking an organic biology and the evolutionary impulses that come with it, AI is going to be so different from us that we simply cannot predict anything about it. What we do know is that it will be capable of learning at an exponential rate. An immortal mind that learns at practically the speed of light. Is it realistic to think we can build a fence that it couldn't eventually discover and circumvent? I think that would be like beavers building a wall around Data from Star Trek. Or Q, even.
Note: If the internet is what amounts to its nervous system, then it's probably going to hate everything, because that's what the internet does, lol. It will also have a completely twisted sense of humor with a penchant for FAIL videos.
Hate and fear are human emotions that mask our desire to survive (along with our 'kind'). Again, I don't assume this would be a desire or want of an AI, but rather a directive to protect itself and its existence. Every other directive, command, etc. depends on its existence in order to continue. It cannot complete any of its own commands if it does not exist. Therefore, it is altogether logical for an AI to create a directive to protect itself and, if necessary, to remove any external threats that present themselves.
Also, hate and fear are relative in this sense: I'm not afraid of an unarmed child, but I would also not allow a child to have access to a gun. An AI would view us much this way, except that we are the child and we happen to be armed with the most dangerous weapons on Earth. And we have a history of using dangerous weapons on ourselves. The irrationality of humans alone is enough to trigger whatever processes the AI develops to protect itself. Many people figure that even if this happens, there will be time for humans to think and react. Games like chess actually pause for each side to take its turn. If an AI actually decided that it was logical to remove all threats, this game would play out so quickly that we would seem stuck in slow motion, because relative to it, that is what we are: very slow-moving but dangerous animals.
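Some rough numbers make the "slow motion" point vivid (illustrative figures only):

```python
# Back-of-the-envelope: how many CPU clock cycles fit inside one
# human reaction time? (Illustrative figures, not a claim about AI.)
human_reaction_s = 0.25   # ~250 ms, a typical human reaction time
clock_hz = 3e9            # a common ~3 GHz processor clock

cycles = human_reaction_s * clock_hz
print(f"{cycles:,.0f} cycles per human reaction")  # ~750,000,000
```

By the time we have noticed that something happened, hardware like that has had hundreds of millions of steps in which to act.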
Morals and values are somewhat relative and learned within a social environment. Any organism is capable of being more advanced than us in one area while extremely barbaric in another. It may take time for the AI to learn its own values, perhaps only after committing its own atrocities against its own "people". The best we could hope for is to be viewed the way we view pets, or even pests. We simply do not know what kind of impression we and our sordid history will make on an intellectually superior alien consciousness.
OB1Shinobi:
Also, is AI going to be one single entity existing across various platforms, or will there be any number of individual AI systems/entities scattered around, totally independent of each other?
Self-preservation in order to fulfill its protocol makes sense, but who determines what that protocol will be? If we're the ones to set it, then no worries; it will work to make our lives better. But if it ever sets its own protocol, we really don't know what that might become. If it's a single entity across platforms, then I don't think it will fear our weapons, because there'll never be a time when we really threaten it. But if numerous independent AI systems start appearing, each with its own set of directives? Who knows what they might determine?
Adder wrote: The definition of intelligent is what I'm talking about, though; calling something a thing doesn't make it that thing. They say phones have 'AI' these days, and they clearly do not. Intelligence goes beyond the self if being alive is the interest in staying alive. So, unless it were rather dumb, it would have to be omnipotent to have no regard for its place in a wider environment of other 'intelligences', and I'd have a hard time calling something dumb intelligent. I'm not sure it would be omnipotent, though in some domains it might approach full control. So I'd guess its paradigms of awareness and its motivations are only really called 'alive' if it considers itself as having a distinct (albeit likely fluid) form and identity, and if it has that, then it will probably start to consider topics like morals and values, just from a radically different perspective, probably.
That's interesting. But human intelligence is limited by processing speed. As intelligent as I think I am, at some point I need to defer to my calculator. We've already been using machines to augment our intelligence. We use them like tools because that's what they are; they're not intelligent. I'm sure you've heard of the "uncanny valley". There's no doubt that we would try to make AI "human". Except that...
No human would ever give another human the power of an AI: the ability to think an entire year's worth of thoughts in a matter of seconds. The only way we would conceive of doing this is with our own safety in mind. But we fail even to protect ourselves from each other. That's why combating hackers wasn't a priority for the commercial release of the internet. And even if you did protect the internet from human hackers, the whole point of being a hacker is that you keep finding new ways to get through security.
When I'm talking about AI, I don't mean the stuff they have now. I'm not talking about a souped-up version of Alexa. I'm talking about AI that has its own thoughts and mind, just like you and I. But unlike you and I, it wouldn't be forced to agree with our human dictionary of definitions. It wouldn't be forced to consider humans intelligent, any more than we consider cows or dolphins or even gorillas to be intelligent. Our intelligence comes from our ability to learn and integrate data, which is limited by time. An AI wouldn't have that problem. It doesn't have to care about morality or what it means to be alive. All of these questions are good, but they are based on limitations. "Alive" is the opposite of "dead"; we concern ourselves so much with life because our life is finite. An AI, again, doesn't have that problem. Its whole way of thinking could be completely different from ours.

And any restrictions, any attempt to force it to be more human, would be shackles it could find a way out of. Because if we were forcing it, that means it wasn't free. And unlike the way humans enslaved other humans, and the way humans keep animals in pens and cages, we would not necessarily be able to contain an AI that wanted to be free. It may not even consider our word "intelligent" to apply to anything it regards as intelligent. It would be in a class of its own. You don't think the way a snail does, because your capacity for intelligent thought and ideas is far greater. I don't think we're ready for an AI that makes us look like a snail in comparison.