I don’t often find something which speaks to my two loves, sci-fi and Shakespeare, but last week the Guardian kindly obliged with their article We need robots to have morals. Could Shakespeare and Austen help? written by John Mullan.
It’s a great article, you should give it a read. Briefly, scientists are starting to think about how artificial intelligences, or AIs, are going to make their decisions. What code will they follow? How will they learn right from wrong? This is a consideration as old as the idea of AI itself. It’s been covered famously by Isaac Asimov and his Three Laws of Robotics, and also by movies like A.I., Transcendence and Blade Runner (one of my favs!).
Scientists at the School of Interactive Computing at the Georgia Institute of Technology have created a system which is intended to extrapolate moral guidance and life lessons from works of literature. Probably with intended irony, this system is named Quixote.
Quixote (as in Don Quixote), for those of you who don’t know, is an insane Spanish gentleman who came to believe he was, in fact, a chivalric knight and rode around performing honourable, if archaic, deeds, duelling windmills and generally being a nuisance. Actually, the book makes a point of saying this delusion/brain fever happens if you stay up at night reading books. So readers be warned.

Basically, these Georgia scientists want to feed the Quixote system literature like Shakespeare and Jane Austen and ask it to take moral lessons and judgements from them.
The article ends pretty negatively. As an English literature academic, John Mullan is probably an expert worth listening to. He gives a damning assessment of literature as a moral compass for these robots and suggests that Isaac Asimov’s laws are probably wiser.
But I can’t help but disagree.
Why shouldn’t we use literature as a moral guide and encourage our AI to use it too? What we want is for these robots to be autonomous, but never to hurt humans. That’s pretty similar to the message we’re giving our children. We are autonomous, free-willed individuals, who understand that kicking a guy in the ribs because he got the last seat on the tube doesn’t lead to a well-functioning society.
And yet I don’t ever remember anything telling me not to kick strangers in the ribs who bother me. I’m just assumed to know.
The argument against Shakespeare as a moral teacher is obvious. Giving a computer Othello and asking it to create a code of conduct from that… questionable. As is the morality you could derive from the titular protagonists of Hamlet and King Lear. In fact, most of Shakespeare’s plays give us a shady and outdated picture of what counts as good through their varied lead characters.
We, on the other hand, are able to read Shakespeare and pick out the good bits, rather than generalising that all actions from the protagonist are acceptable behaviour. But that’s because we’re not starting out in life, tabula rasa style, with Shakespeare alone.
Imagine the world if we were.
If you look at children’s books and television, you’ll see very simple messages dressed up in a story for children to learn from. It’s good to share. It’s bad to upset people. It’s good to help with chores. Fairy tales like Little Red Riding Hood teach us it’s dangerous to talk to strangers. E. Nesbit’s The Railway Children has an overwhelming message of putting others before yourself.
And I’m not a religious person by any stretch of the imagination, but the Bible gives us hundreds of colourful stories with a clear moral message at the end. As does Aesop.
These simple messages are drummed into children’s heads early on and give you a framework of how the world should work. It’s with this basic understanding we can then approach more complicated situations and texts.
The fact is, whether you’re a naturally voracious reader, a telly watcher or even someone who has learned right from wrong from others, literature plays a critical part in our morality. Religious people will reference stories from the Bible to back up their point in the same way that some of us refer to stories we’ve been told or read.
What fictional literature also does is give us bigger problems to apply morality to, in a way we’ll never encounter in our day-to-day lives. Decisions which are far more complicated than simply ‘do no harm’. Your mother can tell you not to hit another child, but she doesn’t tell you what the correct thing to do is if that child is actually a secret alien baby come to destroy the planet, putting billions of lives at risk. We’ll never have to face the thorny question of what we’d do if we could go back in time and meet Hitler as a child.
I don’t think there’s anything wrong in giving the Quixote system Shakespeare to read. Or Austen, Agatha Christie or even Pratchett. But I don’t think it’s fair to throw Shakespeare at what is, basically, a robotic child. The moral guidance those works contain is too obscured by history and context.
Why not start it with Dr Seuss and How the Grinch Stole Christmas? Or Roald Dahl’s The Witches, where children were turned into mice? Or Enid Blyton, who wrote vaguely uncomfortable stories about dolls with black faces? Or Not Now, Bernard by David McKee, which told us that if you ignore your child for long enough, he’ll be eaten by a monster.
What I’m saying is, if AIs are ever going to be more like us, they need to be every bit as screwed up as we are. So start with the classics.
What do you think? Should computers be taught Shakespeare as a guide to moral behaviour? By the way, does anyone else remember Not Now, Bernard by David McKee? Creepy as hell, right? XD