An arms manufacturer has designed and built a gun turret that's able to identify, track and shoot targets, theoretically without the need for human mediation.
Who will teach these robot soldiers
the rules of engagement?
On a green hill overlooking the tree-
lined perimeter of Daejeon, a city in
central South Korea, a machine gun
turret idly scans the horizon. It's about
the size of a large dog: plump, white and
wipe-clean. A belt of bullets – .50
calibre, the sort that can stop a truck in
its tracks – is draped over one shoulder.
An ethernet cable leads from the gun's
base and trails through the tidy grass
into a small gazebo tent that, in the
Korean afternoon heat, you'd be forgiven
for hoping might contain plates of
cucumber sandwiches and a pot of tea.
Instead, the cable slithers up onto a
trestle table before plunging into the
back of a computer, whose screen
displays a colourful patchwork of camera
feeds. One shows a 180-degree, fish-eye
sweep of the horizon in front of us.
Another presents a top-down satellite
view of the scene, like a laid-out Google
Map, trained menacingly on our position.
A red cone, overlaid on the image,
indicates the turret's range. It spreads
across the landscape: four kilometres' worth of territory, enough distance to
penetrate deep into the city from this
favourable vantage point. Next to the
keyboard sits a complicated joystick, the
kind a PC flight simulator enthusiast
might use. A laminated sheet is taped to
the table in front of the controller,
reporting the function of its various
buttons. One aims. Another measures
the distance from the gun to its target.
One loads the bullets into the chamber.
Pull the trigger and it will fire.
A gaggle of engineers standing around
the table flinch as, unannounced, a
warning barks out from a massive,
tripod-mounted speaker. A targeting
square blinks onto the computer screen,
zeroing in on a vehicle that's moving in
the camera's viewfinder. The gun's
muzzle pans as the red square, like
something lifted from futuristic military
video game Call of Duty, moves across
the screen. The speaker, which must
accompany the turret on all of its
expeditions, is known as an acoustic
hailing robot. Its voice has a range of
three kilometres. The sound is delivered
with unimaginable precision, issuing a
warning to a potential target before they
are shot (a warning must precede any
firing, according to international law, one
of the lab-coat-wearing engineers tells
me). "Turn back," it says, in rapid-fire
Korean. "Turn back or we will shoot."
The "we" is important. The Super aEgis
II, South Korea's best-selling automated
turret, will not fire without an OK from a human. The operator must first enter a password into the computer system to unlock the turret's firing ability, then give the manual input that permits it to shoot. "It wasn't
initially designed this way," explains
Jungsuk Park, a senior research engineer
for DoDAAM, the turret's manufacturer.
Park works in the Robotic Surveillance
Division of the company, which is based
in the Yuseong tech district of Daejeon. It
employs 150 staff, most of whom, like
Park, are also engineers. "Our original
version had an auto-firing system," he
explains. "But all of our customers asked
for safeguards to be implemented.
Technologically it wasn't a problem for
us. But they were concerned the gun
might make a mistake."
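In software terms, the safeguard Park describes is a layered interlock: the turret tracks targets on its own, but it cannot fire until an operator has unlocked it, a warning has been issued, and a human has confirmed the shot. The sketch below is purely illustrative, written to show the shape of such a human-in-the-loop check; every name, step and rule in it is an assumption, not a description of DoDAAM's actual code.

# Hypothetical sketch of a human-in-the-loop firing interlock.
# All names and logic are illustrative assumptions, not DoDAAM's software.
import hashlib

class HumanInTheLoopTurret:
    def __init__(self, password_hash: str):
        self._password_hash = password_hash   # set when the system is installed
        self._unlocked = False                # firing ability starts disabled
        self._warned = set()                  # targets that have been hailed

    def unlock(self, password: str) -> bool:
        # Step 1: the operator enters a password to enable firing at all.
        digest = hashlib.sha256(password.encode()).hexdigest()
        self._unlocked = digest == self._password_hash
        return self._unlocked

    def issue_warning(self, target_id: str) -> None:
        # Step 2: a warning must precede any firing.
        print(f"[acoustic hailer] {target_id}: turn back or we will shoot.")
        self._warned.add(target_id)

    def request_fire(self, target_id: str, operator_confirms: bool) -> bool:
        # Step 3: fire only if unlocked, warned, and manually confirmed by a human.
        if not (self._unlocked and target_id in self._warned and operator_confirms):
            return False
        print(f"Engaging {target_id}.")
        return True

# Every safeguard must be satisfied before the gun will fire.
turret = HumanInTheLoopTurret(hashlib.sha256(b"example-password").hexdigest())
turret.unlock("example-password")
turret.issue_warning("vehicle-01")
turret.request_fire("vehicle-01", operator_confirms=True)   # returns True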
As early as 2005 the New York
Times reported the Pentagon's
plans to replace soldiers with
autonomous robots
The Super aEgis II, first revealed in 2010,
is one of a new breed of automated
weapon, able to identify, track and
destroy a moving target from a great
distance, theoretically without human
intervention. The machine has proved
popular and profitable. DoDAAM claims
to have sold more than 30 units since
launch, each one as part of integrated
defence systems costing more than $40m
(£28m) apiece. The turret is currently in
active use in numerous locations in the
Middle East, including three airbases in
the United Arab Emirates (Al Dhafra, Al
Safran and Al Minad), the Royal Palace
in Abu Dhabi, an armoury in Qatar and
numerous other unspecified airports,
power plants, pipelines and military
airbases elsewhere in the world.
The past 15 years have seen a concerted
development of such automated weapons
and drones. The US military uses similar
semi-autonomous robots designed for
bomb disposal and surveillance. In 2000,
US Congress ordered that one-third of
military ground vehicles and deep-strike
aircraft should be replaced by robotic
vehicles. Six years later, hundreds of
PackBot Tactical Mobile Robots were
deployed in Iraq and Afghanistan to open
doors in urban combat, lay optical fibre,
defuse bombs and perform other
hazardous duties that would have
otherwise been carried out by humans.
'Self-imposed restrictions'
As early as 2005 the New York Times
reported the Pentagon's plans to replace
soldiers with autonomous robots. It is
easy to understand why. Robots reduce
the need for humans in combat and
therefore save the lives of soldiers,
sailors and pilots. What parent would
send their child into a war zone if a robot
could do the job instead? But while
devices such as the Super aEgis II that
are able to kill autonomously have
existed for more than a decade, as far as
the public knows no fully autonomous
gun-carrying robots have been used in
active service.
Science fiction writer Isaac Asimov's
First Law of Robotics, that 'a robot may
not injure a human being or, through
inaction, allow a human being to come to
harm', looks like it will soon be broken.
The call from Human Rights Watch for
an outright ban on "the development,
production, and use of fully autonomous
weapons" seems preposterously
unrealistic. Such machines already exist
and are being sold on the market – albeit
with, as DoDAAM's Park put it, "self-
imposed restrictions" on their
capabilities.
"When we started this business we saw
an opportunity," says Yangchan Song,
DoDAAM's managing director of
strategy planning, as we sit down in a
cooled meeting room following the
demonstration. "Automated weapons
will be the future. We were right. The
evolution has been quick. We've already
moved from remote control combat
devices, to what we are approaching now:
smart devices that are able to make their
own decisions."
South Korea has become a leader in this
area of military robotics because the
country shares a border with its sworn
enemy, according to DoDAAM's CEO,
Myung Kwang Chang (a portly man who
wanders his factory's corridors trailed by
a handsome husky with bright blue eyes
whom I am warned to never, ever touch).
"Need is the mother of invention," he
says. "We live in a unique setting. We
have a potent and ever-present enemy
nearby. Because of this constant threat
we have a tradition of developing a
strong military and innovative
supporting technology in this country.
Our weapons don't sleep, like humans
must. They can see in the dark, like
humans can't. Our technology therefore
plugs the gaps in human capability."
Things become more complicated
when the machine is placed in a
location where friend and foe could
potentially mix
At the DMZ, the thin strip of no-man's
land that separates democratic South
Korea from the dictator-led North,
DoDAAM and its competitor Samsung,
who also designed a (now-defunct)
automated turret, ran some tests with
the Super aEgis II. The DMZ is the ideal
location for such a weapon. The zone has
separated the two Koreas since the end
of official hostilities in 1953; because the two sides never signed a peace treaty, the DMZ is
an uninhabited buffer zone scrupulously
guarded by thousands of soldiers on both
sides. Not only does the turret never
sleep and not only can it see in the dark
(thanks to its thermal camera), once it's
pointing in the right direction, it can be
sure that any moving targets identified in
the area are enemies. Things become
more complicated when the machine is
placed in a location where friend and foe
could potentially mix, however.
Currently, the weapon has no way to
distinguish between the two.
Song sits at the wide table flanked by five
young engineers, most of whom were
educated at Ivy League colleges in
America, before returning to work in the
lucrative South Korean weapons
industry. "The next step for us is to get to a
place where our software can discern
whether a target is friend, foe, civilian or
military," he explains. "Right now
humans must identify whether or not a
target is an adversary." Park and the
other engineers claim that they are close
to eliminating the need for this human
intervention. The Super aEgis II is
accomplished at finding potential targets
within an area. (An operator can even
specify a virtual perimeter, so only
moving elements within that area are
picked out by the gun.) Then, thanks to
its numerous cameras, Park says the
gun's software can discern whether or
not a potential target is wearing
explosives under their shirt. "Within a
decade I think we will be able to
computationally identify the type of
enemy based on their uniform," he says.
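The pipeline the engineers describe (detect motion, restrict attention to an operator-drawn perimeter, then leave identification to a human) can be sketched in a few lines. The toy example below uses assumed data structures and thresholds; it illustrates the idea of a virtual perimeter, not the Super aEgis II's software.

# Illustrative sketch only: filtering detected tracks by an operator-drawn
# perimeter ("geofence") before a human decides friend or foe.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    x: float          # position in metres, turret-centred coordinates
    y: float
    speed: float      # metres per second

def inside_perimeter(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    # Ray-casting point-in-polygon test for the operator-drawn perimeter.
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def candidate_targets(tracks: list[Track],
                      perimeter: list[tuple[float, float]],
                      min_speed: float = 0.5) -> list[Track]:
    # Keep only moving tracks inside the perimeter; a human still decides
    # whether each one is friend, foe, civilian or military.
    return [t for t in tracks
            if t.speed >= min_speed and inside_perimeter(t.x, t.y, perimeter)]

# Example: a square perimeter 1km on each side, two detected tracks.
perimeter = [(0, 0), (1000, 0), (1000, 1000), (0, 1000)]
tracks = [Track("vehicle-01", 250, 400, speed=8.0),   # moving, inside the fence
          Track("person-02", 1500, 200, speed=1.2)]   # outside the fence
print([t.track_id for t in candidate_targets(tracks, perimeter)])  # ['vehicle-01']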
Once a weapon is able to tell friend from
foe, and to automatically fire upon the
latter, it's a short step to full automation.
And as soon as a weapon can decide who to kill, and when, RoboCop-esque science fiction becomes fact. The
German philosopher Thomas Metzinger
has argued that the prospect of
increasing the amount of suffering in the
world is so morally awful that we should
cease building artificially intelligent
robots immediately. But the financial
rewards for companies who build these
machines are such that Metzinger's plea
is already obsolete. The robots are not
coming; they are already here. The
question now is, what do we teach them?
Complex rules
Philippa Foot's trolley dilemma, first
posited in 1967, is familiar to any ethics
student. She suggested the following
scenario: a runaway train car is
approaching a fork in the tracks. If it
continues undiverted, a work crew of five
will be struck and killed. If it steers down
the other track, a lone worker will be
killed. What do you, the operator, do?
This kind of ethical quandary will soon
have to be answered not by humans but
by our machines. The self-driving car may have to decide whether to crash into the car in front, potentially injuring its occupants, or to swerve off the road instead, placing its own passengers in danger. (The development
of Google's cars has been partly
motivated by the designer Sebastian
Thrun's experience of losing someone
close to him in a car crash. It reportedly
led to his belief that there is a moral
imperative to build self-driving cars to
save lives.)
Likewise, a fully autonomous version of
the Predator drone may have to decide
whether or not to fire on a house whose
occupants include both enemy soldiers
and civilians. How do you, as a software
engineer, construct a set of rules for such
a device to follow in these scenarios? Is it
possible to program a device to think
for itself? For many, the simplest solution is to sidestep these questions by requiring any automated machine
that puts human life in danger to allow a
human override. This is the reason that
landmines were banned by the Ottawa
treaty in 1997. They were, in the most
basic way imaginable, autonomous weapons that would detonate under whoever stepped on them.
In this context the provision of human overrides makes sense. It seems obvious,
for example, that pilots should have full
control over a plane's autopilot system.
But the 2015 Germanwings disaster,
when co-pilot Andreas Lubitz
deliberately crashed the plane into the
French Alps, killing all 150 passengers,
complicates the matter. Perhaps, in fact,
no pilot should be allowed to override a
computer – at least, not if it means they
are able to fly a plane into a
mountainside?
We acquire an intuitive sense of
what's ethically acceptable by
watching how others behave and
react to situations – Colin Allen
"There are multiple approaches to trying
to develop ethical machines, and many
challenges," explains Gary Marcus, a cognitive scientist at NYU and the founder and CEO of Geometric Intelligence. "We
could try to pre-program everything in
advance, but that's not trivial – how for
example do you program in a notion like
'fairness' or 'harm'?" Beyond ambiguous definitions, there is another dimension to the problem: any set of rules issued to an automated soldier will surely be either too abstract to be properly computable, or too specific to cover all situations.
Some believe the answer, then, is to
mimic the way in which human beings
build an ethical framework and learn to
reflect on different moral rules, making
sense of which ones fit together. "We
acquire an intuitive sense of what's
ethically acceptable by watching how
others behave and react to situations,"
says Colin Allen, professor of cognitive
science and the philosophy of science at
Indiana University, and co-author of the
book Moral Machines: Teaching Robots
Right From Wrong. "In other words, we
learn what is and isn't acceptable,
ethically speaking, from others – with
the danger that we may learn bad
behaviours when presented with the
wrong role models. Either machines will
have to have similar learning capacities
or they will have to have very tightly
constrained spheres of action, remaining
bolted to the factory floor, so to speak."
At DoDAAM, Park has what appears to
be a sound compromise. "When we reach
the point at which we have a turret that
can make fully autonomous decisions by
itself, we will ensure that the AI adheres
to the relevant army's manual. We will
follow that description and incorporate
those rules of engagement into our
system."
'Frozen values'
For Allen, however, this could be a
flawed plan. "Google admits that one of
the hardest problems for their
programming is how an automated car
should behave at a four-way stop sign,"
he explains. "In this kind of scenario it's
a matter of being attuned to local norms,
rather than following the highway code –
which no humans follow strictly." Surely,
in the chaotic context of the battlefield, a
robot must be able to think for itself?
Likewise, there is a danger to "freezing"
our values, both military and civilian,
into hardware. "Imagine if the US
Founders had frozen their values to
permit slavery, the restricted rights of
women and so forth," says Marcus.
"Ultimately, we would probably like a
machine with a very sound basis to be
able to learn for itself, and maybe even
exceed our abilities to reason morally."
For Anders Sandberg, a senior researcher at the Future of Humanity Institute at the Oxford Martin School, the
potential rewards of offering machines
the ability to construct their own ethical
frameworks come with considerable
risks. "A truly self-learning system could
learn different values and theories of what actions are appropriate, and if it
could reflect on itself it might become a
real moral agent in the philosophical
sense," he says. "The problem is that it
might learn seemingly crazy or alien
values even if it starts from commonly held human views."
The clock is ticking on these questions.
Companies such as DoDAAM continue to
break new ground in the field, even
before our species has adequate answers
to the issues their work presents. "We
should be investing now in trying to
figure out how to regulate software, how
to enforce those regulations, and how to
verify that software does what we want it to do," urges Marcus. "We should also invest in figuring out how to implement ethical reasoning into machines. None of this is easy; all of it is likely to become critical in the decades ahead."
Serious research on machine ethics
and AI safety is an exceptionally
new field – Anders Sandberg
Allen believes there is still time. "We
have an opportunity to think through the
ramifications and possible solutions
while the technology is still in
development, which has not always been
the case," he says. "I would like to see
business-government-citizen panels
empowered to assess and monitor the
deployment of machines that are capable
of operating with little or no direct
human supervision in public spaces such
as roads and the airways. Such panels
should provide oversight in much the
same way that human subjects
committees monitor research involving
human subjects."