Technological Singularity
https://www.youtube.com/watch?v=9TRv0cXUVQw
Enjoy!

Knight of TOTJO: Initiate Journal , Apprentice Journal , Knight Journal , Loudzoo's Scrapbook
TM: Proteus
Knighted Apprentices: Tellahane , Skryym
Apprentices: Squint , REBender
Master's Thesis: The Jedi Book of Life
If peace cannot be maintained with honour, it is no longer peace . . .
Well... You say it's "strict parameters" and limited... But that's just you saying it. You arbitrarily decide what is strict and limited and you make it sound like people aren't limited in the same way. Do you know how easily children can be manipulated? Heck, do you know how easily adults can? Where exactly is this line between the robot-like and the human-like? And what of animals who are less intelligent than ourselves, can you draw a line and say until there we can replicate intelligence but no further? Please, do. If you say that machines and people are fundamentally different then you must have a way to tell the difference between the two. What is that way and why do you deem it appropriate?

TheDude wrote: ... (I understand that artificial intelligence can learn, but it is only within very strict parameters. It isn't true knowledge, it's pattern recognition and fitting those patterns into other presets.)...
Yes, what about those things?

TheDude wrote: And even if a computer were to perfectly match a human brain, what about the mind/soul/elan vital?
Well, we know that brains exist. And we know that computers exist. We know that both operate on a fundamentally binary logic in all of their parts with no exception in any place. We also know there is one difference: we designed computers, so we know everything they do and can do, to one extent or another, but we did not design our own brains, so we have to work on understanding their structure. On their own these facts seem to support that artificial brains are in no sense inferior to natural ones. In fact, since - unlike evolution - we can go back to the drawing board and improve our designs from scratch rather than having to tweak what we already have, chances are artificial brains can potentially be far superior to natural ones, and I'm not talking math and logic here but pretty much everything a brain does. Then again, given the same complexity, chances are they will make similar mistakes because of lazy shortcuts in thinking. Emotions will also displace rational thought and genuine inquiry, and biases will arise the same way they do in us. Now, you say there is still something about the human brain fundamentally different, and it is not just your desire to feel that somehow your species is special. What is it then?

TheDude wrote: The proposition that AI could rival human intelligence/experience/whatever is based on a presumption of non-dualism in the Cartesian sense, which I'm not so quick to buy into.
Better to leave questions unanswered than answers unquestioned
Gisteron wrote: Well... You say it's "strict parameters" and limited... But that's just you saying it. You arbitrarily decide what is strict and limited and you make it sound like people aren't limited in the same way. Do you know how easily children can be manipulated? Heck, do you know how easily adults can? Where exactly is this line between the robot-like and the human-like? And what of animals who are less intelligent than ourselves, can you draw a line and say until there we can replicate intelligence but no further? Please, do. If you say that machines and people are fundamentally different then you must have a way to tell the difference between the two. What is that way and why do you deem it appropriate?
I would argue that any intelligence which exists beyond simple instinct is beyond the ability of AI. Yes, humans can be manipulated, but they are still capable of original thought. Even dogs are, and have displayed emotions and learning capabilities. The difference between organism and machine is that the machine is programmed to go through with functions based off of specific events and does not have the ability to improvise based off of original thought.
The perfect humanoid robot will be capable of fulfilling all of the functions of a human being. But it does just that: it fulfils functions based off of specific events. Human beings can wake up in the morning and say "I'm in the mood for pancakes" or "I feel like eating oatmeal today", based entirely on a whim and not based on a pre-coded inventory of possible selections. Even if you were to say that we choose what to eat based on what we know exists, we still have specific and seemingly random cravings for certain meals that a machine simply doesn't have.
The human or gorilla or dog has a mind, in the Cartesian sense, separate from the body. The only problems with Cartesian dualism that I've ever run into are pointing out how the theory could be improved, not pointing out unsolvable issues within the theory itself. Even if we perfectly replicate the brain and body, we still don't know exactly what constitutes the mind, and so it would be impossible for us to replicate (since we don't know what we're replicating).
My argument has nothing to do with the brain at all, nor does it have anything to do with the body. There is nothing special about these things. It is about the mind, which I wholly believe based off of reasonable argumentation by many philosophers is separate from the body (and the brain, specifically).
First IP Journal | Second IP Journal | Apprentice Journal | Meditation Journal | Seminary Journal | Degree Journal
TM: J.K. Barger
Knighted Apprentices: Nairys | Kevlar | Sophia
The propositions of the existence of a mind and of its being separate from the body are both synthetic propositions. They can therefore not be in their entirety assessed by logic alone. I hear Descartes whispering "cogito, ergo sum" around the corner, but that is a conclusion about the presence of something somewhere and does not apply to specific synthetic problems within the one given reality we share, on whatever level it is indeed real. Thus I present the following challenge to you:
In front of you are two things. They look identical and they talk identically. They make the same choices in the same situations and they make equally random choices absent superficially identifiable stimuli. Let's call them person A and person B, respectively. Let person A have a mind and let person B not have one, either because its brain was traumatized or because it is an imperfect clone or because it is an artificially created organic being altogether. You have the freedom to perform any finite amount of measurements with any and all currently existing and hypothetically possible measuring equipment. You can perform any finite amount of any experiment, including dissecting the subjects. You may make a finite amount of finite-distance voyages to any place on Earth, another hypothetically possible planet or outer space, provided that you and everything alive at the beginning of the journey remains alive at the end of it. I grant you that all living things can live eternally for that purpose unless they are damaged beyond repair and killed in this way (i.e. you can't dive into the sun or a black hole or whatever, and you can't put the subjects through a shredder and not put them back together afterwards). You may ask both person A and person B any finite set of questions, and you are aware that one of them has a mind while the other one doesn't. At the end of all this your job is to have identified who is person A and who is person B. How do you proceed?
This challenge works with souls and free will, too. Just because you don't understand what made you want pancakes or oatmeal in the morning doesn't mean it is random, and even if it is, that doesn't make it free and subject to a human essence. If you say there is a fundamental difference between humans and machines, then you must have a way to tell that difference - how would you otherwise know that there is one?
Better to leave questions unanswered than answers unquestioned
Gisteron wrote: But of course we can also program other ones to have memory to where they actually learn from both mistakes and successes. We know this because it already happened. So the things you say about no original thoughts and no learning capabilities are frankly wrong and I don't see why you'd need to say false things.
Completely disagree here. We'll talk about original thought first.
The chess program knows the rules of chess and the requirements for victory and defeat, as does any chess player. So you're right again in stating that the function of the human in this instance can be matched with the function of the machine. It's a witty example, but I'm afraid you're missing the point. The chess machine is programmed to make movements based on the rules of the game, and it's probably set up with some openings and rules of thumb as well. But that's all it does. It looks at the game and moves based on the rules of the game and the options available to it at the time which fit in with the rules of thumb that are programmed in. At no point does it say "This is boring. Do you want to play Shogi instead?". At no point does it say "Why don't we stop for a coffee? I'll have two creams and one sugar." At no point does it say "I can see you're obviously the better/worse player here. There's really no point in continuing." At no point does it goad you into making poor moves or try to trick you through any form of message. Because it's not programmed to do that. It's not going through any original or personal thought, it's just making moves based on the moves available to it. In a real chess match, you've always got the option to flip the board over and walk away. That's original thought. But the chess program is made to do certain things; its abilities stop once those things aren't required. In this case, it's playing chess. Even if we grant that it displays strategies, those strategies are programmed in or made by chance based on statistical probabilities.
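To make that concrete, here is a minimal sketch of the kind of program being described (a toy take-away game instead of chess, to keep it self-contained; all names are illustrative, not from any real engine). The program searches ahead with minimax, but every move it will ever consider comes out of its hard-coded move generator.

```python
# A minimal game-playing "AI" for a toy take-away game: players
# alternately remove 1-3 counters, and whoever takes the last one wins.
# Note what the program CANNOT do: every move it will ever make comes
# out of legal_moves(); suggesting shogi, pausing for coffee, or
# flipping the board is simply not in its option space.

def legal_moves(counters):
    """The hard-coded rules of the game: remove 1, 2 or 3 counters."""
    return [m for m in (1, 2, 3) if m <= counters]

def value(counters):
    """Negamax value of the position for the player about to move:
    +1 if they can force a win, -1 if they are lost."""
    if counters == 0:
        return -1  # the previous player took the last counter and won
    return max(-value(counters - m) for m in legal_moves(counters))

def best_move(counters):
    """Pick the legal move with the best look-ahead value."""
    return max(legal_moves(counters), key=lambda m: -value(counters - m))

print(best_move(5))  # 1: leaves 4 counters, a lost position for the opponent
```

However deep the search goes, nothing outside `legal_moves` can ever be chosen; that is the sense in which the program's abilities stop at the edge of its rules.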
What about learning and memory? While it's true that we've seen AI that can adapt to situations, such as in some video game AI, that AI still has incredible limits. It will have a set list of things that it can do: go up, go down, etc. The player may do many more things, such as glitch exploitation, manipulation of the physics engine (have you seen computer players in Super Smash Bros Melee compared to real professional players?), and so on. The AI will not learn these things even if it sees them because that stuff isn't programmed into the bot. I'm not saying that AI can't store information, recall information, or anything like that. In fact, AI would be perfectly acceptable in a math class or a history class based on didactics as a teaching method where you take in information and spit it out later. But could it go through the Socratic method? If an AI not programmed with geometric equations were to talk to Socrates, would Socrates be able to teach it geometry, so that it could solve geometric equations and explain how they work? I don't think so, and I've certainly never seen any example of an AI doing so when it wasn't specifically programmed to learn geometry.
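The "set list of things it can do" is quite literal in most game AI: even an agent that learns only re-weights a fixed action repertoire. A minimal sketch, assuming a trivially simple one-state task (the task, names, and parameters here are invented for illustration):

```python
import random

# The agent's entire behavioural repertoire is this hard-coded tuple.
# Learning can only re-weight these options; an action that is not
# listed here can never be selected, no matter what the agent observes.
ACTIONS = ("up", "down", "left", "right")

def train(reward_fn, episodes=1000, alpha=0.1, epsilon=0.2, seed=0):
    """Tabular Q-learning for a one-state task: estimate the value of
    each fixed action and come to prefer the best one."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)   # explore: try a random listed action
        else:
            a = max(q, key=q.get)     # exploit: best action found so far
        q[a] += alpha * (reward_fn(a) - q[a])  # incremental value update
    return q

# A made-up task where "right" is the only rewarded action.
q = train(lambda a: 1.0 if a == "right" else 0.0)
best = max(q, key=q.get)
print(best)  # "right"
```

The agent reliably "learns" which of its four actions pays off, but an action outside `ACTIONS` - a glitch exploit, say - is not merely unlikely, it is unrepresentable.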
It would be impossible to proceed. Dissection or physical examination would be of no help whatsoever while trying to determine the presence of something non-corporeal. In addition, any task of sensory experience, whether it be conversation or otherwise, would be useless while trying to discover something which exists separate from sensory experience (and is non-corporeal), and if I were to go by Descartes's strategy of doubt, it would not be possible for me to verify whether or not person A or B had a mind or any true form of existence independent of my sensory experience. This is also applicable to any being or object other than myself.

Gisteron wrote: How do you proceed?
What's of interest here is that you're suggesting that I make an experiment based off of empirical data in order to determine the validity of something which itself questions the validity of empirical data!
There's an old joke about deterministic philosophy professors failing students for cheating on tests.

Gisteron wrote: This challenge works with souls and free will, too. Just because you don't understand what made you want pancakes or oatmeal in the morning doesn't mean it is random,
But I never claimed that the mind/soul/elan vital is found only in humans. I said that it is inherent to living beings who have thoughts that go beyond simple instinct. So with your question, I'll regard it as if you said that instead of human.

Gisteron wrote: and even if it is, that doesn't make it free and subject to a human essence.
First of all, I'd like to say that the burden of proof lies on the one making the positive claim. That is to say that if you come to me and say "this AI is capable of thought in the same way that a human is", I would ask you for the proof, and furthermore I would ask for proof that the AI was actually thinking instead of just taking in information and giving out a response from a set list of possible responses. Being as that list could theoretically be incredibly long for any given question and that speaking conventions could be programmed into the AI, a set amount of times asking the same question could result in the AI switching to another set of "angrier" or more aggressive answers, etc., I think you would not be able to actually prove to me that the AI was capable of true thought. Same thing that happened with person A and person B. You would be unable to prove it, and since the burden of proof would fall on you (as you're making the positive claim), we wouldn't get anywhere.

Gisteron wrote: If you say there is a fundamental difference between humans and machines, then you must have a way to tell that difference - how would you otherwise know that there is one?
Were I scientifically minded, I would say that I would know that there's a difference based on inductive reasoning. But I'm not really into that. Of course, I could make a snide remark saying that one fundamental difference is that one is mechanical and the other is organic. But what you're asking about is mental capabilities, so I'll say this:
Cartesian dualism or any other form of mind-body dissonance MIGHT exist.
If it exists, then it should be clear from Descartes (Meditation 2, among other writings) that humans possess a mind.
If it doesn't exist, then a computer could theoretically be made which copies the function of a human body (including the brain), and it would act just like a human being.
However, if it exists, due to its non-corporeal nature, it could not be recreated in a machine. After all, we would have no way of studying it, and could only make an approximation. Even if that were the case, it is beyond our ability to create something that is non-corporeal, and if I'm wrong about that, point it out to me.
It follows that if there is a mind as separate from the body that an AI would not possess it, while humans would, and that is the fundamental difference between man and machine.
If you would wish to convince me that singularity could exist, you would have to first defeat the argument from Descartes. Of course, I realize that Descartes is making a positive claim and that the burden of proof is on him -- I believe he proved himself in his writing, and he did respond to many people who had issues with his idea of the separation between mind and body. Unfortunately, he's not here right now.
First IP Journal | Second IP Journal | Apprentice Journal | Meditation Journal | Seminary Journal | Degree Journal
TM: J.K. Barger
Knighted Apprentices: Nairys | Kevlar | Sophia
Whyte Horse (Topic Author)
Do not try to understand me... rather realize there is no me.
We also know much more about the brain now. Here's an article about an AI brain simulation that performs fluid reasoning.
Spaun Article
Few are those who see with their own eyes and feel with their own hearts.
Now, when it comes to the Socratic method, I frankly do not know what it takes to program a machine that could operate within it. Surely no self-awareness is required, but then again, many people have trouble either performing it or learning from it being performed upon them. It was original and new back in Plato's day, but it is not an end-all solution to every learning problem anybody ever had.
Geometry on the other hand can easily be taught to anything that has a remotely logical brain, because it follows necessarily and inescapably from set theory alone. Birds and bats and many other animals who show no sign of a mind by the standards you set in your previous post can still apply, consciously or instinctively, geometric algorithms to navigate. Now, can they do seven-dimensional algebra? No. They didn't evolve to, because they don't need to. But we can and I see no reason why computers couldn't in principle. Which leads me into the next point.
The burden of proof: Now, I don't say that computers can be everything people can be. Nor have I pointed to one that is. Indeed, as far as I know, the ones we've built so far cannot be; although almost every part has been replicated to some extent or another, the entirety hasn't yet. The positive claim you are making is not that it would be impossible. If I argued against that, I would have to show the possibility and the best I can do for that case is a fallacious argument from ignorance. What you did claim however is the reason why it is impossible: That there is something about us that is fundamentally lacking in any conceivable machine and you chose to call it the "mind". To recap, I challenged you to devise a way to identify a mind when you see one, and apparently you cannot. Now, you do say that it questions the validity of empirical data, and that is fine. However, by doing so, you are basically admitting that there is no way of identifying that difference you are talking about and that frankly you don't have a way of knowing that it is actually there. As a result, the concept is rendered irrelevant. If you are saying that the only remaining distinction between a human being and a machine at some point could be something that makes no identifiable difference, I think I might just come onboard since for all intents and purposes we are in agreement from that point onward.
Oh, but wait - there is more!
There is nothing "scientifically minded" about saying that you know something based on inductive reasoning. Science doesn't know by reason, it knows by being able to demonstrate. So does math though, so let's be more specific: science knows by evidence. You know, the sort of empirical data you say your claim of that difference flies in the face of. Sorry, you can't have it both ways. Since you said you weren't much into this however, neither shall I dwell too long on it, so let's move on to Descartes...

Let's see... James Maxwell was born some 181 years after Descartes' death. Carl Linnaeus was born a mere 57 years after Descartes left this earth. 172 is the difference to Gregor Mendel, 159 to Charles Darwin, and Isaac Newton was as young as seven by the time Descartes passed away. Now, his contributions to mathematics notwithstanding, I do not think that he is an authority on either electricity or robotics or biology or medicine or even psychology, let alone neuroscience. Therefore whatever he might have said on these topics should still be double-checked, regardless of just how reasonable it seems on the surface. Now, if you wish to present a particular argument of his in favour of your assertions anyway, please, go ahead. He is not here to do it himself, so I shan't go back to his work to then argue with a dead man. You are here, you think he has something interesting to contribute, so you represent him, if you so wish.

Seeing how the mind by your own admission is untestable, by virtue of and in addition to its making no difference that could be verifiable even in principle, I don't see why you'd need to, but that is your choice to make. In the meantime, the situation we find ourselves in with this argument is one that I can lazily play in my favour in the following manner: to Jedi one would assume it necessary that there be a life essence of sorts, but given the version of it that you suggest, I'm afraid I must say Occam's Razor disposes of it rather snappily.
Better to leave questions unanswered than answers unquestioned