
- Free Article: No
- Contents Category: Technology
- Review Article: Yes
- Article Title: Moral machines
- Article Subtitle: Why technology needs philosophy
- Online Only: No
- Custom Highlight Text:
We like to think that we would stick up for ourselves after being wronged. No one wants to be a coward. Often, though, faced with the realities of power, wealth, and superior resources, we shrink from the good fight. More worryingly, humans can misdiagnose or externalise an issue, rationalising it away. We take a problem grounded in interpersonal relationships, politics, or some other social arrangement, and convince ourselves it is an objective, natural state of being. After all, as distinguished artificial intelligence researcher Toby Walsh, author of Machines Behaving Badly: The morality of AI, says: ‘We are, for example, frequently very poor at explaining ourselves. All of us make biased and unfair decisions.’
- Featured Image (400px * 250px):
- Alt Tag (Featured Image): Dante Aloni reviews 'Machines Behaving Badly: The morality of AI' by Toby Walsh
- Book 1 Title: Machines Behaving Badly
- Book 1 Subtitle: The morality of AI
- Book 1 Biblio: La Trobe University Press, $32.99 pb, 275 pp
- Book 1 Cover Small (400 x 600):
- Book 1 Cover (800 x 1200):
The trouble with machines is that we can’t make them moral – at least no more so than ourselves. As Walsh concedes: ‘We cannot build moral machines, but we will let them make decisions of a moral nature.’ If machines reflect our human failings, the way forward is to rethink or rediscover morality. The first step would be to lay out a clear vision of what makes a moral human being and society, and then to explain how machines could fulfil such goals.
Walsh’s prose is dexterous when mulling over the limitations of a specific technology, like self-driving cars. But his analysis of AI ethics gets bogged down when he compares lists of principles produced by technologists and technocrats, from Asimov to the European Union. Walsh’s reliance on lists throughout the book limits the imaginative scope of his critique and stalls the writing. For example, Walsh states: ‘Autonomy, on the other hand, is an entirely novel problem.’ This might be convincing if Walsh meant only in relation to technology. But judging from his counterexamples, his meaning is that autonomy is a completely novel problem brought about by AI. Walsh is an engineer, not a humanities scholar. But theorising on the political concept of autonomy – the ability to make an informed, uncoerced decision – has been with us since the ancient Greeks. Couldn’t their wisdom assist in the labour of love that is AI development? Like any other form of labour, it is characterised by relationships. And relationships are subject to that sociological favourite: power. For Walsh, ‘power does not trump ethics’. Maybe it shouldn’t, but all too often it does.
Take an example that Walsh offers about the ethical outcomes of his own AI research. He has written algorithms to tackle ‘travelling salesperson problems’, calculating the best route for a fleet of trucks. Walsh’s algorithm, and others like it, can routinely cut transport costs, total kilometres, and fuel emissions by ten per cent. This is a win for CEOs, drivers, and the environment – until you consider the long history of new technological efficiencies that have only extended exploitation, consumption, and surveillance. These are concerns that Walsh discusses in other sections of his book, and that he wants to see addressed. But any reckoning with AI has to consider the interconnectedness of algorithmic systems, where efficiency gains in one area produce effects in others. This is why something like Amazon’s same-day delivery results not in less consumption but in increased algorithmic sorting and advertising on its shopping platform, which in turn puts more trucks on the road. The tension between ethical AI and power lies in this path of cascading algorithms.
Walsh argues that we should hold AI to higher standards than humans. First, because, unlike us, machines are unlikely ever to be held accountable. This isn’t simply a legal loophole, but a practical problem of the limits of machine intelligence in relation to consequences. ‘Machines do not suffer or feel pain. They cannot be punished … They are made of the wrong stuff to be moral beings.’ AI discipline does not work.
Second, AI should be held to a higher standard because, in Walsh’s view, we can do so, both legally and technically. Practically, this would require sweeping regulatory and technical intervention in the production and application of machine learning. One innovation that Walsh thinks has real promise is moving from cloud-based AI to systems housed in personal devices, disconnected from the broader ocean of user data. But who will build these devices when the economic model of technology companies relies on the network effects of constant, shareable surveillance?
It is fitting that Walsh is so concerned with AI’s effects on climate change. He is in the same embattled position as climate scientists were when explaining global warming to sceptical audiences in the early 2000s. Walsh is at his most insightful and engaging when explaining the complex decision matrices of AI, and there is a real need for talented technical educators to explain these baffling technologies. Yet, even at its most reasonably optimistic, Machines Behaving Badly never goes beyond the ethical haunts promoted by Silicon Valley.