Tuesday 22 October 2013

Taking new technologies to court (Telegraph)

From self-driving cars to man-made consciousness, science is about to unleash a host of legal dilemmas

If a pedestrian is killed by a robot car, who is liable? The “driver” (who may have been, quite legally, asleep or working on a laptop)? The owner? The manufacturer? The operator of the GPS network?
It is often said that hard cases make bad law. After a series of well-publicised canine maulings, the Dangerous Dogs Act was passed in 1991, and proved to be a disaster. There are now calls for new laws to deal with the alleged menace posed by the internet – but most cybercrime is merely harassment, abuse, fraud and theft, perpetrated with novel machinery.
But sometimes new technologies really do open up a whole new legal playing field. For example, up to the late 19th century, the rules of the road were written with horses and pedestrians in mind. These struggled to cope with the advent of self-propelled motor vehicles; hence Britain’s Locomotive Act of 1865, which limited such vehicles to 4mph in the country and 2mph in towns, and insisted one of their crew walk 60 yards ahead carrying a red flag. Similarly, the invention of the aeroplane forced America to scrap a law which deemed that the airspace over anyone’s land was their property.
Now a series of what have been dubbed “disruptive technologies” are threatening to rewrite the legal rulebook again. What distinguishes these innovations is first that they are adopted rapidly, and second that, rather than improve upon existing technologies, they completely replace them (as the car did to the horse).
As we stare into a future of automation and genetic augmentation, of new robotic and reproductive technologies, some experts believe that the 21st century is going to be a boom time for lawyers as judges struggle to keep up.
Perhaps the first disruptive technology to give our learned friends something to think about will be the self-driving car. The capability to mass-produce completely autonomous automobiles, guided by the GPS network and on-board sensors, has been in place for several years, thanks in part to work by Google, whose self-driving Priuses and Ford Focuses have been trundling around California for some time. Legal worries, not practical issues, have delayed implementation of a technology that many experts believe could save tens of thousands of lives a year (more than a million people are killed annually on the roads, and nearly all fatal accidents are down to human error).
The problem is, if a pedestrian is killed by a robot car, who is liable? The “driver” (who may have been, quite legally, asleep or working on a laptop)? The owner? The manufacturer? The American government, which owns and operates the GPS network?
According to Burkhard Schafer, a legal academic at Edinburgh University, some of those old horse laws could still work, even in the age of the driverless car. After all, horses are autonomous, potentially dangerous means of transport. And the law supposes a degree of common sense by all parties: “You don’t approach a horse from behind,” Schafer told New Scientist recently. Similarly, he says, you wouldn’t run out in front of a speeding robot car – it would be unreasonable to expect its systems to be able to defeat the laws of physics. If a horse owner can be shown to have maltreated his animal, or trained it poorly, he can be held liable for injury – just as a robot car owner could be liable if he had interfered with its electronics or failed to have it serviced properly.
But sometimes machines do things that no horse can do. It is possible to be libelled or slandered by a machine – with no malice by any human agency. People have successfully sued Google, for example, because its search algorithms have linked them to criminal namesakes.
A fascinating (but wholly speculative) topic was recently discussed by a group of futurologist-lawyers, who have since 2006 been meeting at annual “Gikii” conferences to discuss law in a future world. What would happen if someone invented a teleportation device capable of moving humans instantly from one place to another?
The question is not (entirely) moot; since the late Nineties, scientists have used a technique called quantum teleportation to “beam” the exact states of various sub-atomic particles from A to B. In 2004 Austrian scientists managed to teleport a whole atom and many believe that within decades, it will be possible to transport DNA molecules and even viruses in this way.
Scaling this up to a human being is a formidable challenge (some calculations say that the computer needed to handle the number-crunching would need to be bigger than the known universe). But put that aside for now, and consider the legal issue: what happens if you teleport a person and the process malfunctions?
Most imagined teleportation devices rely on the destruction of the original object, in location A, and the transmission of its properties (the type, location and quantum states of every atom in that object) to location B, followed by reassembly. The most obvious malfunction – total failure of the device – poses no legal problems. The victim has simply been killed by a faulty machine and his relatives will sue accordingly.
But say something goes wrong, and the original “you” is not destroyed, yet a new “you” is created on the other side of the world. Assuming that thoughts, memories and personality are teleported along with the physical body, who is the real you? Does the teleportation company have the legal right (even duty) to say to the person at location A (who expected to wake up in location B) “step this way sir, this won’t hurt a bit”, and then quietly eliminate them?
Perhaps the most fascinating legal challenge will be posed by the discovery – or advent – of intelligent, conscious entities that are not human. In Lausanne in Switzerland, Prof Henry Markram’s Blue Brain project is attempting to replicate animal, and then human, consciousness in a huge supercomputer by “reverse-engineering”, via software, the physical structure of the mammalian brain.
When I met him a couple of years ago, he told me he was confident that some sort of consciousness could be achieved by 2020, perhaps even a simulacrum of human consciousness. And then we would have a problem. In 1789, Jeremy Bentham wrote of animals: “The question is not can they reason? Nor, can they talk? But can they suffer?” If we can build a conscious machine, then we can make it suffer.
Or consider the discovery of alien life. Even the existence of Martian microbes would create a legal minefield. Who would be held responsible for, say, inadvertently wiping out the Martian biosphere by sending an unsterilised space probe to the Red Planet, replete with earthly bugs? And say we discover intelligent aliens (or they us), via radio signals or robot probes? What rights would a future alien have should it be brought here?
Back on Earth, some apes are as intelligent as three-year-old children – and in Japan, scientists have shown that chimps can easily beat any adult human in a series of short-term-memory tasks. An African Grey parrot called Alex, which went to meet its maker in 2007, acquired a working vocabulary of 100 English words. No one is disputing that the average human is brighter than the average ape or bird, but it is clear that in some ways, some apes and some birds are brighter than some people – people who nevertheless have legal rights afforded to no animal.
The most uncomfortable legal challenge would be the recreation of an extinct hominid species. We are decades away from cloning an “ape man”, but DNA from ancient Neanderthal bones has been sequenced, and it is not impossible that one day someone will have a go. The Neanderthals buried their dead, made tools and fire, and may have had language. But they were not us. Technically, any revived Neanderthal would be classed as an animal. Until he got himself a very good lawyer, that is.