Technological Singularity

19 Aug 2015 17:12 #200360 by Gisteron
Replied by Gisteron on topic Technological Singularity
So, quickly, a few words on the philosophy of science, though I think this is perhaps a conversation better held over PM, in a different thread, or in chat or on the hangout. Suffice it to say that deductive and abductive reasoning are no lesser a part of science than induction, and in fact the entire enterprise of testing and retesting would have no purpose whatsoever were it not for prior deduction in particular. Philosophy of science would also have it that science is an enterprise aiming for truth, when in fact truth has little to no relevance in any actual scientific study, aside maybe from history; but that's beside any point you made, so I shan't press it.

Since we were discussing the technological singularity (which is a synthetic matter, by the way, and thus a matter of epistemic filters beyond mere logic and completely outside of metaphysics altogether), I thought we were operating on the base assumption that reality is in fact real. If you wish to go down the solipsist route instead, be my guest, but I shan't follow. I have made my case, you have made yours, and now that I ask for consistent standards, you ignore the request, despite quoting it right back to me. You make up more arbitrary and ill-defined standards (like "intuition" or "soul", as if you could demonstrate that you yourself had any such thing) as you go, and I'm frankly done pursuing your constantly shifting goal posts if you fail to acknowledge how your previous ones were already hit in the very quotes you throw right back at me! And that is setting aside how, in the meantime, you try to slip down the solipsist rabbit hole, because that is the only escape from the corner you now find yourself in.

I didn't think I needed to assert that reality is real, but now, in order to escape the inevitable, you assume that it isn't, leaving me with the burden of proving it to you. You are shifting the goal post again, but this time I am frankly comfortable letting you do that, because if denying reality is what it takes to keep your case afloat, I think it high time that I rest mine.

Better to leave questions unanswered than answers unquestioned

19 Aug 2015 18:20 #200364 by TheDude
Replied by TheDude on topic Technological Singularity

Gisteron wrote: So, quickly, a few words on the philosophy of science, though I think this is perhaps a conversation better held over PM, in a different thread, or in chat or on the hangout.


Agreed.

Since we were discussing the technological singularity (which is a synthetic matter, by the way, and thus a matter of epistemic filters beyond mere logic and completely outside of metaphysics altogether), I thought we were operating on the base assumption that reality is in fact real. If you wish to go down the solipsist route instead, be my guest, but I shan't follow.


I didn't mean to invoke solipsism. I merely presented the strategy of doubt, and how it led to the mind-body problem, in simplistic terms. And in this situation, speaking of singularity (not simply a powerful AI, but actual technological singularity), I think it is a necessary discussion to have. As my original argument went, yes, you may be able to recreate a human brain in a machine, but you cannot recreate the mind, and this is why we won't see singularity.

I have made my case, you have made yours, and now that I ask for consistent standards, you ignore the request, despite quoting it right back to me. You make up more arbitrary and ill-defined standards (like "intuition" or "soul", as if you could demonstrate that you yourself had any such thing) as you go, and I'm frankly done pursuing your constantly shifting goal posts if you fail to acknowledge how your previous ones were already hit in the very quotes you throw right back at me!


All I have seen is a call for radical empiricism, and as a result I am not satisfied with you "hitting" the goal post. And there is a difference between moving goal posts and the natural progression of a discussion. My fundamental stance still has not been addressed, that stance being that the mind is separate from the body, following the basic cogito, ergo sum argument as seen in Meditations 2 (wherein the phrase does not actually appear). You have yet to provide a strong argument against it, merely calling it irrelevant due to your tendencies towards empiricism. And, if empiricism is what you demand of me, then I demand to know why you hold so strongly to this pragmatic viewpoint. I think that is only fair, as I have justified my Cartesian stance well enough. If you don't want your empiricism challenged, then don't demand it at every point, and it won't be addressed.

I didn't think I needed to assert that reality is real, but now, in order to escape the inevitable, you assume that it isn't, leaving me with the burden of proving it to you. You are shifting the goal post again, but this time I am frankly comfortable letting you do that, because if denying reality is what it takes to keep your case afloat, I think it high time that I rest mine.


Oh, I never denied reality. I simply pointed out that you can't sit smugly and rely on Occam's Razor, and that I can just as easily turn that around on you -- which I did. Now you reject the criticisms based on... what? That you don't like the fact that a materialist, empiricist viewpoint can be disagreed with? Of this I am not entirely sure.

Regardless, I have done as you asked and provided the argument by Descartes which you insisted on. If there is a mind as separate from body, then human beings cannot create an AI capable of singularity. You agreed to that point already. And, I have provided an argument for a mind as separate from the body, which you have not addressed.

1. There is a mind separate from the body. (Descartes, Meditations 2, as previously written and simplified.)
2. This mind is non-corporeal.
3. Human beings cannot create non-corporeal things.
4. ∴ Human beings cannot create a mind.
5. Singularity would require that an AI operate in the same way as a human mind/brain combination. (In other words, singularity would require a mind -- something you've already agreed to.)
6. This AI would have to be created by humans. (Unless you argue aliens -- then just replace the word "human" with "alien" at every point and the argument is the same.)
7. Being unable to create a mind, humans could not create an AI capable of singularity.
8. ∴ Technological singularity is impossible.

Based on what I've proposed and what you've already agreed to, there is no way for humans to create an AI capable of singularity. Unless you can disprove any of those things, I don't see how you can stand against it.

Oh, and by the way.

ill-defined standards (like "intuition" or "soul")

"Soul" here is in the classical sense and just functionally means "mind", as is common in the reading of ancient philosophy. Intuition is... fairly self explanatory, isn't it?

In any case, I also pointed out that an AI can't be capable of induction. And by induction, I mean coming to conclusions without absolutely certain premises. I guess they could allow for "do action x if situation y is 95%+ likely", but that's a lot more rigid than what we tend to think of as inductive reasoning, and I don't think you would disagree with me on that. If you do, tell me why.
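
To make concrete just how rigid that kind of rule is, here is a rough sketch (the function name, threshold, and probabilities are made up purely for illustration, not taken from any real system):

```python
# A sketch of the rigid "do action x if situation y is 95%+ likely" rule
# described above. Threshold and probabilities are invented for illustration.
def should_do_x(probability_of_y: float, threshold: float = 0.95) -> bool:
    """Act only when the estimated likelihood of situation y clears a fixed cutoff."""
    return probability_of_y >= threshold

print(should_do_x(0.97))  # True: 0.97 clears the cutoff, so "do action x"
print(should_do_x(0.80))  # False: below the cutoff the rule simply stays silent
```

A rule like this only ever applies the cutoff it was given; it never revises that cutoff in light of new experience, which is the rigidity I am pointing at.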

First IP Journal | Second IP Journal | Apprentice Journal | Meditation Journal | Seminary Journal | Degree Journal
TM: J.K. Barger
Knighted Apprentices: Nairys | Kevlar | Sophia

19 Aug 2015 20:42 #200376 by Gisteron
Replied by Gisteron on topic Technological Singularity
A conclusion from induction is not a conclusion from less than certain premises. I have myself written code that would save a value to a variable and then change or keep that value based on further input, or in some cases drop it altogether, which is what you are talking about, but which is not at all inductive reasoning. Induction is the method of proving general mathematical propositions by proving their applicability in particular cases and showing that their applicability remains unchanged for finite variations of the input parameters. Another form of inductive reasoning is the recognition of patterns in large samples and the adjustment of the idealized patterns designed to predict further samples as the sample size grows. Both of these are well within the capabilities of a computer.
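
As a crude sketch of that second kind (with invented observations, nothing more than an illustration), a program can keep an idealized pattern, here just a running average, and adjust it as each new sample arrives:

```python
# Crude illustration of adjusting an idealized pattern as the sample grows:
# keep a running average of the observations and refine the prediction with
# every new one. The observation values are invented for the example.
observations = [2.1, 1.9, 2.4, 2.0, 2.2, 1.8]

estimate = 0.0
for n, x in enumerate(observations, start=1):
    estimate += (x - estimate) / n  # incremental update of the running mean
    print(f"after {n} samples the predicted next value is about {estimate:.2f}")
```

The same scheme extends to far richer patterns than an average, but the principle is the one described: the prediction is revised as the sample grows.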

Now, as for the argument, here is the problem: what a mind absent a body is, is still pending a definition. You see, Descartes assumes that he actually can doubt having a brain while at the same time he cannot doubt having a mind (whatever that is) - that it is even possible for the thing he would call "I" to exist whilst his body doesn't. That possibility needs to be demonstrated, and until it is, I am as free to dismiss it without evidence as he is to assert it without evidence. Nor does he - in your version of the argument - demonstrate that or why we must operate exclusively on what we can know for certain, a premise upon which everything that follows rests. Thanks to the is-ought problem I don't think there is a way for him to show this.
However, you already granted that you cannot tell a mindless machine from a mind-possessed human being if they act equivalently. For some reason you are saying that the singularity requires the machine to act like a human would, but also that that requires a genuine non-corporeal mind underneath (which still remains undefined). You have yet to demonstrate that the latter is a necessary condition for the former - or indeed even a sufficient one.

So, in short, your fundamental stance of a mind as separate from the body is not something you get to assert and demand that I disprove. The onus is on you to define what such a thing is and to show that it is possible, neither of which you have done, either paraphrasing Descartes or on your own. Living in this reality, which I dare assume is real, as do you for practical purposes, I know that brains exist and that some of them are too complex to be perfectly predictable with the limited knowledge we have of any individual one at any given point in time. You assume all that, plus that the unpredictability is the product of this magical entity called the "mind" that operates from within the brain but is technically not connected to it. Occam's Razor has no problems at all disposing of this additional assumption. Now, if in order to escape its merciless cut you wish to claim that you do not assume this reality in the first place, then again, that is fine, and I can rest my case. If for the mind to be real, reality must not be real, I really don't feel like I need to add anything more, or do I?

Intuition is not "fairly self-explanatory". To me it is a feeling of some degree of certainty towards an issue I do not recall being consciously familiar with. All I can say is that I do not know where my seeming knowledge is coming from. Do you want to invoke an argument from incredulity perhaps and assert that since you don't know of a genuine explanation, therefore it is the product of an unexplainable and undefined magical entity you choose to call the "mind"? Be my guest. You say this seeming knowledge of unknown origin is something machines cannot have, yet at the same time you say that machines cannot have any level of self-awareness or consciousness, so technically the way you see it they would never have anything but intuition. Trying to have it both ways again, are we?

Better to leave questions unanswered than answers unquestioned

19 Aug 2015 21:17 #200377 by TheDude
Replied by TheDude on topic Technological Singularity

However, you already granted that you cannot tell a mindless machine from a mind-possessed human being if they act equivalently. For some reason you are saying that the singularity requires the machine to act like a human would, but also that that requires a genuine non-corporeal mind underneath (which still remains undefined).


Alright, Gisteron. I understand your point of view and its practical nature, but I don't understand how you can consider it satisfactory. Consider this situation:
I have, by some miracle, found out everything about you. I know all of your tendencies, every aspect of your mind and body, even those things kept secret and hidden away possibly even from yourself. And I have created a machine with synthetic skin and hair and silent parts which consumes organic material after it has been prepared (cooked food) in order to fuel itself; it even excretes artificial waste (though I'm not sure you would want to verify that). It has a strong AI which perfectly simulates who you are. In essence, it is a perfect replica, though its insides are mechanical and manmade. (I don't believe that this is possible, but please go along with it.)
This thing has your thinking methods and looks exactly like you. It has even been made to gain and lose weight and age like a human being. It also has all of your memories.
Anyone in the room with it would say "Yep, that's Gisteron!" And you would protest, saying, "No, it's not me. I'm me. There's only one of me."
But why would you do this? Functionally, it is you, isn't it? It does everything that you do, thinks the way you do, et cetera. How can you possibly object to saying that it is you?
You are caught up in this mindset of "It looks like a duck, sounds like a duck; it's a duck." But what makes the duck? Is it the quack and feathers or is it the DNA/RNA, chromosomes, and so on? It seems to me that for no good reason you would be fine with saying that an AI has reached singularity even if it is only through superficial behavior, without any comment on any of the actual internal thought processes which may occur in the AI. You say this because there is no possible way to validate whether or not the AI robot is thinking.
Suppose that the robot copy of you from the example isn't really a strong AI, but rather something with an incredibly large number of "if-then" scenarios programmed into it which represent all of the actions and words of a human being. Then it hasn't really reached singularity, but it is convincing enough that any human being would think that it is, indeed, a human being. What you're suggesting is just as unrealistic as my suggesting solipsism, since you are suggesting that we just accept this at face value regardless of the fact that we cannot EVER possibly know whether or not the robot actually is thinking!
For what reason do you think that this is acceptable, wishful thinking aside?

First IP Journal | Second IP Journal | Apprentice Journal | Meditation Journal | Seminary Journal | Degree Journal
TM: J.K. Barger
Knighted Apprentices: Nairys | Kevlar | Sophia

19 Aug 2015 22:04 - 19 Aug 2015 22:05 #200384 by Gisteron
Replied by Gisteron on topic Technological Singularity
Well... If you ask me to prove that I am thinking myself, I couldn't do it either, because what we call "actual thinking" vs "robotic thinking" is a matter of what does the thinking, not how the thinking comes about or where it leads. Without exception, every thought I have is the result of a myriad of if-then processes, namely "if the action potential is reached - then transmit the signal to adjacent neurons". Not all my reactions are stored in a look-up table, but at the core, all of them are outputs of bio-electrical, strictly logical circuitry, even those that on the face of it seem irrational or emotional. If my circuits can function the way they do, I see no reason why artificial ones couldn't. I must by default assume that they could, seeing how simple natural mechanisms can be copied, and the capacity to copy does not change with small changes of complexity (yay, mathematical induction at work!).
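
If it helps to picture that if-then step, here is a toy version (the inputs, weights, and threshold are invented, and a real neuron is of course vastly messier; the point is only the if-then character of the mechanism):

```python
# Toy version of the "if the action potential is reached - then transmit" step.
# Inputs, weights and the threshold are invented; a real neuron is far messier,
# but the if-then character of the mechanism is all this is meant to show.
def fires(inputs, weights, threshold=1.0):
    potential = sum(i * w for i, w in zip(inputs, weights))
    return potential >= threshold  # the whole "decision" is a single comparison

print(fires([1, 0, 1], [0.6, 0.9, 0.5]))  # True: 0.6 + 0.5 = 1.1 reaches the threshold
print(fires([1, 0, 0], [0.6, 0.9, 0.5]))  # False: 0.6 falls short, no signal is sent
```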

You assume that I would object to people identifying the machine as myself, but I wouldn't. I do not subscribe to the notion that there is an essence of me-ness nothing else can share. Of course, the very first memory that machine would form would inevitably be different from my own, if only because we cannot occupy the same space at the same time, so as time passed one could in principle tell us apart, since it would grow and change differently from me, unless we kept merging memories every now and again. I do not insist that there can be only one. Why would I? Am I so insecure that I would require a notion of uniqueness to the point where I'd deny what is actually happening? I am perfectly fine acknowledging that the machine, the instant it was created, was exactly like me, and that therefore everybody is justified in calling it by my name. I would probably treat it as I would treat myself, and if we did indeed keep merging memories, I would be comfortable assuming full responsibility for its actions as well as giving it full responsibility over mine. It is not that there is no second one like me because I am unique. Rather the other way around: I am unique because there is nobody else like me. If there were, whether I liked it or not, I would have to acknowledge the fact that I am not unique any longer. Chances are, machine-me would feel about it the same way I would if it were activated only to be informed that whilst its memories are those of a unique being, they are in fact not unique anymore.

Better to leave questions unanswered than answers unquestioned
Last edit: 19 Aug 2015 22:05 by Gisteron.

19 Aug 2015 22:31 #200389 by TheDude
Replied by TheDude on topic Technological Singularity
Of course, you had to know that I would respond to that neuron comment. Observing neurons is doing just that: observing neurons. I can grant that there is a correlation between thought and the firing of neurons, the release of neurotransmitters, and all of that fun stuff, but I can't grant you that the thought itself is the signal transmission between neurons. It could be the case that the thought causes the signal, that the signal causes the thought, that by some stretch of cosmic luck the signal and thought are completely unrelated but simultaneous, or many other things, because, again, I find Descartes' argument convincing and see no reason to simplify everything into just body mechanics.
With that aside, I think you're missing the point entirely on my robot example. The fact is that even if you and the robot had a link where all of your thoughts were shared (let's say it's in some alternative reality where everything is the same but it exists there instead of you, or something ridiculous), or any other situation, there is still a difference between the two of you. Namely, it is composed of synthetic materials and metals, whereas you are flesh and bone. Nobody would ever know the difference between the two of you because nobody's going to take either of you apart.
There is a massive difference between you and the copy in reality, but if you only view things from that "it looks/sounds like a duck, it is a duck" mentality, you won't recognize the difference. These two objects are not the same, but you're perfectly fine with saying that they are both the same thing for superficial reasons, and I just don't think that's acceptable.

First IP Journal | Second IP Journal | Apprentice Journal | Meditation Journal | Seminary Journal | Degree Journal
TM: J.K. Barger
Knighted Apprentices: Nairys | Kevlar | Sophia
The following user(s) said Thank You: Gisteron

19 Aug 2015 23:06 - 19 Aug 2015 23:10 #200392 by Gisteron
Replied by Gisteron on topic Technological Singularity
Well, no thought is one isolated signal. Rather, thoughts are what we call clusters of signals too vast to trace individually. If they don't fire, thoughts don't occur. If they do fire, thoughts do occur. I find it easier to assume that there is a firing going on than that there is a firing going on and also a magical thought-entity floating somewhere in the aether in the meantime. I agree it is reductionist, but until you can prove to me that you are "actually" thinking, you don't get to require of robots that they prove to you that they are "actually" thinking. Either there is a theoretically possible way to tell the difference or there isn't.

We could of course go as far as they went in Battlestar Galactica with their Cylons and say that this machine doesn't need to be of metal but can be of artificial flesh and artificial bone and can even reproduce with those of our kind. What will you say on that day? Where is the line between superficial and no longer superficial? You see, the only reason you insist that there is a difference is that it makes you uncomfortable to think that we would not be the only ones. And you hoped that I would object the same way you would, but I wouldn't, because my identity and notion of self-worth are in no way dependent on a notion of my very own, very unique soul. I don't see myself as special, and I don't see my species as special either. That is not to say they are not important to me; of course they are. It just wouldn't change a bit if tomorrow we lived side by side with robots that we could not tell from our own people. Our species is but one of many, and adding one more bothers me no more than it did my distant ancestors back when sapiens was not the only human species around. You call my reasons superficial, yet there is no level deep enough to make them non-superficial, or is there? If you remember, when I proposed the thought experiment, I did offer as one of the testing methods to dissect the subjects to any level and use any equipment that either exists or is possible in theory. You call this superficial not because it doesn't go deep enough, but because of the conclusion I reach. In a sense you are begging the question, insisting that my reasoning is flawed just because it doesn't lead to the conclusion you favour, and not because there is a structural weakness or a failure in premise accuracy. Technological singularity is "just unacceptable" to you. I feel differently. And if you are curious, here is a crude outline of my justification.

If between two things the difference is naught, they are the same:
a - b = 0 <=> a = b
That goes both for the things themselves as well as for ways of looking at them.
If the difference between having a mind and not having a mind is naught, then they are the same... And the mind might as well be naught:
a - (a + b) = 0 <=> a = a + b <=> b = 0

So at the end of the day, if it makes no difference, it makes no difference. That is the very definition of irrelevancy (in this context anyway). Therefore the question of the possibility of the singularity is a matter of just how well we can reproduce those things you call superficial. Until such time as we know of anything underneath, Occam's Razor frees us from the need to assume it. And so nothing but that superficial stuff remains for us to aim for.

Better to leave questions unanswered than answers unquestioned
Last edit: 19 Aug 2015 23:10 by Gisteron.

20 Aug 2015 00:07 #200399 by TheDude
Replied by TheDude on topic Technological Singularity
Well, I'm sick of arguing, because it's honestly going nowhere. You're far too pragmatic to take the side of the Cartesian when it comes to the mind-body problem, and I've personally discarded pragmatism, as I don't care for empiricism and enjoy ontological metaphysics far too much to be considered a pragmatist. The legitimacy of any AI depends on how whoever is considering it views the mind, and our approaches are opposite. There is no point in debating this topic any further between us.

But, it's been fun. I haven't had a chance to polish my philosophy/debating in a long time. I'm pretty rusty right now. We should do it again sometime.

First IP Journal | Second IP Journal | Apprentice Journal | Meditation Journal | Seminary Journal | Degree Journal
TM: J.K. Barger
Knighted Apprentices: Nairys | Kevlar | Sophia

20 Aug 2015 02:02 #200409 by Whyte Horse
I think it's interesting how two thought paradigms have emerged here:
1.) Singularity is impossible and therefore irrelevant.
2.) Singularity is possible and relevant.

So which is it? I used to be in camp #1 until Obama decided to fund exascale supercomputers and the Japanese ran a partial brain simulation on their petascale supercomputer. Now I'm in camp #2 and I'm a little uneasy about becoming the 2nd most intelligent species on Earth. Loads of people in camp #2 think that robots will do all the work for us and we will have nothing but leisure... but that's what was said about outsourcing, ya know? We Americans will be the managers and the Chinese and Indians will be the worker bees. Didn't happen like that. Why hire an American manager when you can get 10 Chinese managers that are bilingual for half the price? Same goes with robots.

Few are those who see with their own eyes and feel with their own hearts.

20 Aug 2015 02:28 - 20 Aug 2015 02:29 #200412 by Adder
Replied by Adder on topic Technological Singularity
I guess it will only be as relevant as the information it can access and process. If we make it equivalent in physicality and cognitive function to a human, then it should be just as confused and clumsy as the rest of us
:lol:
Though yeah, an unrestricted AI would be as superhuman as the tools it could integrate with. Given enough social media mining and thermal brain imaging, it might even be able to read a person's thoughts in real time with pretty good accuracy. I'm getting all Wintermute from Neuromancer now, but perhaps AIs will replace governments, and we humans will rally around logical forks in thinking... or at least we'll be the stupid things it tries to help grow
:dry:
Mother AI, shepherd to the sheep.

Introverted extropian, mechatronic neurothealogizing, technogaian buddhist.
Likes integration, visualization, elucidation and transformation.
Jou ~ Deg ~ Vlo ~ Sem ~ Mod ~ Med ~ Dis
TM: Grand Master Mark Anjuu
Last edit: 20 Aug 2015 02:29 by Adder.
