There is a strong chance that the technology to create digital beings will be available before human society is able to integrate and adjust to the current generation of AI.
So what may happen is that the way that AI really gets integrated is by actually replacing human beings who largely die off.
I find this interesting not so much because it might be wrong, but because of the passive language: "chance", "happen", "die off".
I cannot tell why the spokesmen I have cited want the developments I forecast to become true. Some of them have told me that they work on them for the morally bankrupt reason that "If we don't do it, someone else will." They fear that evil people will develop superintelligent machines and use them to oppress mankind, and that the only defense against these enemy machines will be superintelligent machines controlled by us, that is, by well-intentioned people. Others reveal that they have abdicated their autonomy by appealing to the "principle" of technological inevitability. But, finally, all I can say with assurance is that these people are not stupid. All the rest is mystery.
There was a widespread conviction that it is impossible to withstand temptation of any kind, that none of us could be trusted or even be expected to be trustworthy when the chips are down, that to be tempted and to be forced are almost the same, whereas in the words of Mary McCarthy, who first spotted this fallacy: "If somebody points a gun at you and says, 'Kill your friend or I will kill you,' he is tempting you, that is all." And while a temptation where one's life is at stake may be a legal excuse for a crime, it certainly is not a moral justification.
It is fortunate and wise that no law exists for sins of omission and no human court is called upon to sit in judgment over them. But it is equally fortunate that there exists still one institution in society in which it is well-nigh impossible to evade issues of personal responsibility, where all justifications of a nonspecific, abstract nature - from the Zeitgeist down to the Oedipus complex - break down, where not systems or trends or original sin are judged, but men of flesh and blood like you and me, whose deeds are of course still human deeds but who appear before a tribunal because they have broken some law whose maintenance we regard as essential for the integrity of our common humanity. Legal and moral issues are by no means the same, but they have a certain affinity with each other because they both presuppose the power of judgment.
What mattered in our early, nontheoretical education in morality was never the conduct of the true culprit, of whom even then no one in his right mind could expect other than the worst. Thus we were outraged, but not morally disturbed, by the bestial behavior of the stormtroopers in the concentration camps and the torture cellars of the secret police, and it would have been strange indeed to grow morally indignant over the speeches of the Nazi bigwigs in power, whose opinions had been common knowledge for years. [...] The moral issue arose only with the phenomenon of "coordination," that is, not with fear-inspired hypocrisy, but with this very early eagerness not to miss the train of History, with this, as it were, honest overnight change of opinion that befell a great majority of public figures in all walks of life and all ramifications of culture, accompanied, as it was, by an incredible ease with which lifelong friendships were broken and discarded. In brief, what disturbed us was the behavior not of our enemies but of our friends, who had done nothing to bring this situation about. They were not responsible for the Nazis, they were only impressed by the Nazi success and unable to pit their own judgment against the verdict of History, as they read it. Without taking into account the almost universal breakdown, not of personal responsibility, but of personal judgment in the early stages of the Nazi regime, it is impossible to understand what actually happened.