Culture of rat brain cells stained with antibody to MAP2 (green), Neurofilament (red) and DNA (blue)

Suppose you accept the idea of the singularity: that we’ll fairly soon reach a point where machine intelligence equals and then surpasses human intelligence (or, if not soon, then at some point in the future, assuming we don’t destroy ourselves first). What would the consequences of that be?

Who would be running the world? Who, or what, would be generating the wealth, and who would be claiming benefits?

Would a super-intelligent machine-run society be democratic, and what would democracy look like in that kind of society? Would humans be allowed to vote? If being eligible to vote meant being able to demonstrate an in-depth knowledge of political affairs, then most humans might fail – though you could have humans with augmented intelligence. Imagine an app that, via a normal-looking set of headphones, could pick up thought instructions you sent to it, so you could think: “Google ‘economic strategies for deficit reduction’” and a list of links would appear in front of your visual cortex. Then you could think “click on the second link down” and so on.

If you were wearing a set of headphones like that, then even if you were the most ignorant idiot ever to walk the Earth, you could sound like Stephen Fry.

It might be that, as with the industrial revolution, the AI revolution will create as many jobs for humans as it destroys, but if any task requiring intellectual capabilities can be done better by a machine, it’s hard to see what those jobs might look like. Perhaps there’ll be a large busking economy in which people parade their talents or their lives, and probably in many cases their bodies as well, on pay-to-view channels.

But that’s only going to work if the people wanting to view such things can afford to pay, either directly or via state provision. If AIs are accorded citizenship and are paid salaries, it’s likely that AIs rather than humans will become the highest earners, paying the highest rates of tax.

So if by then we’ve established the principle that those in receipt of benefits are shirkers and scroungers, that could come back to bite us. Unless, that is, the AIs with their advanced intellect also have a morality superior to ours. They’ll be like our children, though we ought to expect that the moral system that gives birth to them will influence the moral system they come to develop.

As in the Channel 4 series Humans, there may be an emancipation struggle, with conscious, thinking, feeling AIs demanding the same rights as humans. For some of us, that would be an egalitarianism too far: it doesn’t matter if they’re conscious, or if they can fall in love and feel pain; they’re not human. We should look after our own.

Those kinds of views are going to come across as racist, and will be hard to sustain in the face of so many mixed-race, or mixed-species, people: people who are part human and part machine. Is someone with a pacemaker or a bionic hand less human than someone without those things?

