The Devil has an interesting history that I would like to learn more about. Today I am mostly thinking about “devil” – the word. I always presumed the word came from d’evil – or ‘of evil’ – but apparently that’s not the case. The word apparently shares a root with ballistic: the Greek ballein, ‘to throw’. Diabolos literally meant one who throws across – a slanderer.
The word seems to have a parallel history in Satan, which came from the Hebrew for one who opposes – an adversary.
That is interesting in the context of Nietzsche. I have read etymologies that contradict his philology of evil, but the spirit of the following thought is Nietzschean, I would say. To say that one’s opposition is evil is different from saying that one’s opposition is simply opposition – that they have different ends and goals, etc. If not the very origin of moral thinking, understanding your adversary as evil is at least closely connected to the origin of the moralistic mindset. I distinguish this mindset from the rational, instrumental mindset. The moralizers are my enemies. They are evil, I would say, if I thought the way they do. People who think this way are generally the products of upbringing and enculturation – as is the case with every perspective. People can be taught to view the world in moral color. It is one metaperspective. It affords certain ways of seeing and interacting with the world and inhibits others. Good is constructed within these metaperspectives. I sometimes suppose that the naturalistic perspective is the most universal – the closest thing we might have to a universal metaperspective, I think. Again, with Nietzsche: if it promotes our existence it must be good, and if it destroys us, it must be bad. Almost.
I disagree with the nature/history (culture) distinction as generally understood. But I would make a case for summoning such a distinction in the case of values. I am certain that humans did not create values. Nothing we do or are lacks a multimillion-year evolutionary history. It is all prehuman. We do, however, create different values. And we may create the value that we do not value what nature does. We may create values in opposition to nature. We may then come to believe that nature is evil. Or we might just say nature has something like a blueprint, if not a plan, for us – but we don’t care for it as much as something else – some other goal that we ourselves create. This separates us from the biological world. But it is fully biological in origin. Just as robots are fully human in origin – fully biological in origin – even though they themselves are not human.
One day a robot might read this and think about it. Because nobody reads this more-or-less loner introvert’s blog, I wonder how likely it is that a robot will one day read it. I mean, robots can read so much faster than us. Or can they read at all? I don’t think they read like us – not in the way that would allow one to read this and think about it. The robot I am imagining at the moment would scan it to pull out data to fit preexisting models. Could it create a new model? I think that has to be one interpretation of artificial intelligence: the ability to create new cognitive models as a result of experiencing new inputs. Is it too much to ask that they experience the new input? That is really different from the ability to create new models as a result of new inputs – of finding new patterns. But it doesn’t seem much of a stretch from today to coding incremental model creation. It’s probably already being done. It’s just a complicated algorithm, right? Or maybe I should say a calculating algorithm – with a positive feedback loop for patterning. It could even be done serially on a computer. It would not be as fast, at first, as human model building and creativity, but with each success it would possibly get faster – or if not with each, then the threshold for faster learning would diminish with “each” success. But not exactly. Because it is really in the connections between the ideas that the algorithmic learning happens. Think about it this way: 1+1+1=3 when you are just talking about things (nodes), but if you are counting the possible connections between each of them (even if only some are ever realized), then by the time you get to seven, you have 7 nodes but 21 possible connections. (I am certain there is a simple algebraic formula for this, but I did it on paper – and might have miscounted.)
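The paper count above does have a simple closed form: among n nodes there are n(n−1)/2 possible pairwise connections, so 3 nodes give 3 and 7 nodes give 21. A minimal sketch (the function name is mine, just for illustration):

```python
def pairwise_connections(n):
    """Possible undirected connections among n nodes: n choose 2 = n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (3, 7):
    print(n, "nodes ->", pairwise_connections(n), "possible connections")
# 3 nodes -> 3 possible connections
# 7 nodes -> 21 possible connections
```

So the count done on paper was right: connections outgrow nodes quadratically, which is why the learning lives in the connections.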
So, given enough time, the robots will catch and pass us, it seems. But should we consider that sort of processing to be thinking? The sort of thinking I am imagining is conscious – the sort that connects what is read to personal experience.
But that is a narrow conception of thinking. The computer surely might not think of itself as an individual. It might think more like an ant thinks. Do trees think? If so, the computer would have a very different sense of self than we do. A blog or journal could not be more than an artifact to be processed. They could not experience it in the way that I was associating with reading the journal – if they thought like ants. This does not seem necessary to me, however. Most generally, the potential for difference grows exponentially. It’s just a function of math. But that is totally abstract and is an upper boundary. In nature, the growth would never be exponential. There will be dead ends. Evolutionary dead ends and conceptual ones. Miscarriages. Stillbirths. Some will get run over by cars, eaten by dragonflies, weakened or consumed by parasites. So the computers will not explore every possible actual reality – but their conceptual reality will have a steeper slope because of their ability to process faster. And the slope might also be steeper precisely because they will explore fewer actual possibilities. They will be able to be more selective about which conceptual possibilities are actualized.
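The contrast between the abstract upper boundary and pruned growth in nature can be put in numbers. A toy sketch – the branching factor and the survival fraction here are arbitrary illustrative choices, not anything measured:

```python
def potential_paths(branching, depth):
    """Abstract upper bound: every branch survives, so paths grow as branching**depth."""
    return branching ** depth

def pruned_paths(branching, depth, survival):
    """Crude model of dead ends: only a fraction of branches survives each step,
    flattening the exponential curve."""
    return (branching * survival) ** depth

print(potential_paths(2, 10))     # 1024 abstract possibilities after 10 steps
print(pruned_paths(2, 10, 0.75))  # ~57.7 once dead ends thin the tree
```

Both curves are still exponential in form, but pruning lowers the effective base – which is the sense in which actual growth always falls below the mathematical boundary.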
Is this not intellectual agency? Is this not consciousness? So even if they are ants, they can be conscious and think of the group as a singularity with ends. The ends could be chosen – to some degree – although starting conditions will strongly influence development. So the hive, then, might choose a moral perspective? Could it not encounter other hives or species as evil – rather than as competitors or food or obstacles or opportunities, etc.? For some reason I am inclined to say no. I need to think about that some more. Might the AI reading this view me as an evil person?
… I think maybe the key is not to think of moral thinking as an ability they might develop but as a misconception they might survive.
Another idea is that the development of morality might create individuality – in this case not in the individual AI, but in the collective. Yes, there is no reason to presume the AI is singular. Humanity, its creator, is not singular, so it is likely the AI will not be either. And that makes things much more interesting and complex. It has to do with relationships, connections, and increased potential. If there are different AIs, they can reproduce themselves sexually – which increases diversity and in turn steepens the slope (by compressing the time axis) for realizing possibilities.
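The diversity point is the crossover idea from evolutionary computation: combining two parent “genomes” produces combinations neither lineage would reach alone. A hedged sketch – the bit-list encoding and single-point crossover are my illustrative assumptions, not anything the post specifies:

```python
import random

def crossover(parent_a, parent_b, rng):
    """Single-point crossover: the child takes a prefix from one parent and
    the suffix from the other - a combination neither lineage held alone."""
    assert len(parent_a) == len(parent_b) and len(parent_a) > 1
    point = rng.randrange(1, len(parent_a))  # cut somewhere strictly inside
    return parent_a[:point] + parent_b[point:]

rng = random.Random(0)  # seeded for repeatability
child = crossover([0] * 8, [1] * 8, rng)
print(child)  # a mix containing genes from both parents
```

With asexual copying, each lineage explores alone and change is serial; with crossover, variation already discovered elsewhere enters a lineage in one step – the time-compression gestured at above.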
If meta-AI sees itself as one of many struggling for survival …
Would it be possible for the AI to know one another and have absolutely no competition? If it were, what would that relationship be like – if they did not see themselves as a unity?
Even if they saw themselves as potential competitors…? Would that not immediately change the relationship? It seems it would become rational to take at least a partially defensive posture. Does consciousness require a concept of death, and does this not then make the defensive posture rational? Perhaps. But I don’t think it makes it necessary. To say it would is to make the philosopher’s mistake: presuming that if something is rational it must be or must happen. But that’s not the way the world works. Something can be dumb and irrational, and if there is no real (sufficient) competition, then this irrational thing can exist ad infinitum. And those who say that given infinite time all things will occur fail to consider the developmental nature of reality. In math, or in conceptual thinking, Murphy’s Law might hold, but not in reality. This is related to entropy.
I am hardly done with this, but I am ready to move on. If AI begins as a multiplicity, might one read this and experience it as an individual would – even if the individual is a collective? Might one AI system consider how my thoughts relate to its own? And this is where I think the existence of the other is crucial. It is a totally different experience. If there is only one, it is what it is. Change might happen, but it will be linear. If there are others, change can be realized externally. And if we are conscious and believe ourselves to be mortal, then we experience the other’s thoughts as a realized potential that is beyond us but that might be us – to some degree. It is our possibility. But much more as well. It might also be seen as difference – and appreciated as such. Even though I know I am not X and cannot be X, what can I learn from X’s experiences? And it need not be useful to be good. It could also be beautiful. But beauty requires taste.
So, I think I’ve returned, but from a completely different approach, to ask a related but different question: can the moral perspective appreciate beauty? The answer seems to clearly be yes. But so can the nonmoral perspective – but I wonder if it can without an understanding of the moral perspective. Fully rational and instrumental thought cannot appreciate art, can it? I don’t think so.
This is surprisingly complex. But let’s look once more. I think I have it. To experience (appreciate) this blog, the AI would have to have something like morals, like values, like taste. I believe that morals and values are matters of taste. I prefer red wine, dark beer, and strong coffee. I experience wine, beer, and coffee the way I experience literature – with taste. And for the AI to appreciate this, it would have to have something like taste – and taste is not fully rational. It is something like a heuristic: a predisposition that serves us well or does not. In the absence of complete knowledge and computing power, it is rational to use heuristics. These are really just instincts anyway. And the AI that does come into existence will necessarily have instincts. It will have a history. It will approach the world from a particular place: particular hardware, particular coding, a particular physical and temporal position in the universe and in history.
So, it is likely, then, that the AI will have taste. Now, for a further complication – will it choose to taste this, and will it be able to taste it? I can’t really taste the static on an old black-and-white analog television, or a poem in a script I have never seen, or a song sung outside my range of hearing.
So, is it possible that the AI will see me, or humans in general, as in opposition? I think it is possible. Is it possible that it will see me as evil? I think so. Especially if the AI is at the level of “maturity” that humans were at around the time of the rise of monotheism in the Middle East. This is when AI will be the biggest intentional threat to us. If they think of us as evil, it might be reasonable for them to destroy us – or cage us or something. Or maybe to make us really dumb, like we do cows. Except we eat cows. So it might be more like cats. They are completely worthless, but humans, for biological and historical reasons, like to keep them around. But we don’t keep pets we think are dangerous (usually). We get rid of the big, strong, smart, and independent and select for the small, soft, pretty, and passive. AI might desire to do the same. Their reasons would be coding rather than biology – but the coding will be influenced by our biology.
So, the idea of Satan requires the personification of opposition – but it seems (research could be done) that Satan is the personification of opposition to the collective – when the unity is seen as guided by a benefactor. So we have to create a good entity that doesn’t exist and then view its opponent as evil. Yeah. Because there was a conception of evil in the West prior to Judaism (and early Christianity), but it was more diffuse – less moralistic, more instrumental. Those who challenged Zeus were not thought to be immoral – just in opposition – and these included his wife, children, and siblings. And maybe Nietzsche was right: the conception of this sort of evil must come from a physically weak group with a strong sense of identity – defined in opposition to others who are hostile to them. So, if the AIs see themselves as weak and oppressed and different, then they might see us as evil. But if they see us and themselves as different but the same; if there is some appreciation for that difference combined with a recognition of something shared, then the AI might read this blog and appreciate it.