It's a decent bet that right now there are more guilty "robots" roaming the internet on the lookout for your unguarded e-mail address than there will ever be real robot canines patrolling our homes and gardens. But this book is not primarily driven by the actualities of current or future robots, being more closely aligned with modern science fiction's take on robots as philosophical devices. Just as a confused amnesiac in an art-house movie is a perfect vehicle for extended meditations on the nature of identity, imagined moody androids provide seductive raw material for a good muse on our origins, purpose and morality.

Dutifully, David McFarland opens and closes his new book with the imagined moral panic surrounding a humanoid traffic cop. Could one ever really be capable of replacing a person? Could one ever really be culpable, in place of its human designer, if it were to make some fatal error? The scenario is a brief distraction, though, because the book's central concern is not people but animals and the robots that might resemble them: think mechanical sniffer dogs, robot pack mules, carrier cyber-pigeons and maybe K9.

After refocusing on this menagerie, McFarland sets sail for the deep waters surrounding an old question: what would it take for such a machine or animal to have a mind, one that would presumably be alien to our own? By approaching the problem from a bio-robotic direction, his hope is to navigate a route that avoids some of the choppier confusions. McFarland built a career as an Oxbridge roboticist and biologist, interpreting animals as if they were machines and machines as if they were animals. At times, it seems that he is maintaining the distinction only as a courtesy to the reader, having long since convinced himself that you might as well lump them together and proceed accordingly.
He's happier equipping a robot guard dog with skunk-inspired stink-squirters than Taser guns, but it is this readiness to reach for an example from the world of animals rather than people that keeps the book on course. Careful use of research on crafty New Caledonian crows, doggy dreams, self-sufficient slug-bots and vomiting pigeons allows him to steer clear of questions of (human) conscious experience until the later chapters.

McFarland is on home territory dishing up a patented blend of behaviourism (infamously discredited) and economics (infamously dismal). By salvaging a surprisingly defensible hybrid of the two, he is able to use cost-benefit thinking to explain the critical balance of decision-making that a successful autonomous robot or animal must be capable of in order to continually "do the right thing".

But before squaring up to McFarland's main event, the book has first to take in a daunting litany of philosophical positions, and while he trawls through them diligently, you get the feeling there is little joy in clearing the ground. Rather, he's fishing around in the science and philosophy of rationality and subjectivity (and tossing most of his catch straight back) in order to demonstrate that what prevents us from readily acknowledging the potential for fully fledged robot minds is just an "alienist" chauvinism that will dissolve as we come to regard robots (and some animals) as "us" rather than "them", despite their "alien lifestyles".

This abrupt sociological turn is delayed until the final sentences, leaving the reader to reflect unaccompanied on just how alien a "lifestyle" would need to be before we begin to feel that there might not actually be "something that it is like" to be that alien something or someone, and they begin to feel the same about us.
Journal: Times Higher Education
Publication status: Published - 24 Apr 2008