The singularity is, by one definition, the first point at which an artificial intelligence equivalent to a human's is created. There are several ways this could happen, but there is one that I find particularly interesting and that I envisioned in a kind of post-apocalyptic story some time ago. Recently, I learnt that science has taken a rudimentary first step towards realising this future: scientists have mapped the brain of a worm and used it to control a robot.
If we could do for a human mind what has been done for the worm (and there are many additional practical difficulties), we could create an artificial human intelligence. Except for its mechanical body and brain, it would be human. This raises many interesting questions.
Ethically, these copies of humans should be given full human rights (at least if we act rationally, which we likely will not). Imagine, for instance, sitting in a device to be copied: you cannot know whether you will come out as “yourself” or as “your copy.” If such copying succeeds, it calls much of religion into question; religion may even cease to exist. There are also a number of economic questions: copies could act as a cheap source of labour, spurring economic growth but reducing living standards. Imagine billions of small humanoid machines in large buildings. The economist Robin Hanson has envisioned how some of these economic questions may play out.
In my short story “The First Copy” (which you can read here), I examine some of the other questions with a fictional history of how the technology may develop. In fact, I guessed that the first brain to be copied would be that of the worm C. elegans (the mapping that has been done is not as accurate as the copying I envisioned, but it will improve). I think it very likely that if the singularity can occur, this is how it will be done.
Read my short story “The First Copy” and tell me if you agree.