Douglas Adams, whose vividly sentient android, Marvin, remained in a state of permanent, severe depression despite his planet-sized brain, famously summed up the three stages of sophistication of human societies thus:
How can we eat?
Why do we eat?
Where shall we have lunch?
In How to Survive a Robot Invasion: Rights, Responsibility, and AI, David J. Gunkel proposes three somewhat parallel stages of robot sophistication: What, Quasi-other, and Who. ‘What’ means tools, the robot as ‘fancy hammer’ (a coinage of Bill Smart at Oregon State University). ‘Who’ describes fully conscious beings such as Marvin, Isaac Asimov’s R Daneel Olivaw, or perhaps Martha Wells’ self-hacking Murderbot.
Gunkel sets these aside in favour of the ‘Quasi-other’ middle ground. But as the number of robots navigating human society continues to increase, and as their manufacturers continue to focus on making them increasingly humanoid in presentation and response, there will be problems.
This is ground frequently covered at the annual We Robot conference, founded ten years ago to identify, and solve in advance, the legal and social conflicts that increasingly numerous and sophisticated robots will bring. Like Gunkel, many We Robot papers (for example, those of Kate Darling, whom Gunkel quotes) consider the problems arising from human relationships with robots. Our tendency to anthropomorphise may help us treat (selected) animals better, but it’s distinctly unhelpful when the robot being anthropomorphised is designed to detect landmines by stepping on them, blowing itself up in the process, and the people getting sentimental are the soldiers whose lives it is saving.
This is Gunkel’s main argument: the problem with those ‘quasi-other’ robots is not them, it’s us.
A third way
Gunkel himself has trodden this path before, notably in his 2018 book, Robot Rights, in which he argued both the case for and against awarding these manufactured artifacts some form of legal personhood. Thankfully, Gunkel does not spend time arguing about whether it’s good or bad for the robot; what interests him is the effect on us of either treating increasingly ‘alive’ tools as wholly-owned property or awarding them far more sentience than they possess.
In this new book, Gunkel proposes a form of joint responsibility — a third way between the ‘fancy hammer’ and legal personhood. Either end of that spectrum poses difficulties. Would you want Microsoft’s experimental Twitter chatbot, Tay, which was rapidly turned into a hate-monger by the humans interacting with it, to be able to claim free speech rights as part of its legal personhood? Conversely, it’s easy enough to hold a manufacturer accountable for a hammer whose head flies off when you use it to pound a nail, but, as Gunkel explains, citing Miranda Mowbray, the unpredictable confluence of machine learning and variable circumstances can create problems that are literally no one’s fault.
Unfortunately, Gunkel stops at this idea of joint responsibility without exploring it fully. In another We Robot paper, Madeleine Elish developed the idea of ‘moral crumple zones’ — the recognition that in a human-robot system it will be the human who gets the blame. Without careful safeguards, all the pass-the-hot-potato problems we complain about with biased algorithms and social media business models will be repeated with robots, only more so.