Photo courtesy of the Office of the Secretary of the Army.

This past week, Carnegie Mellon University expanded its long-standing collaboration with the U.S. Department of Defense as the United States Army launched its Artificial Intelligence Task Force, which will be based out of the National Robotics Engineering Center (NREC) in Lawrenceville.

With the NREC as its base, the Task Force will consist of several small teams of military personnel that will develop prototypes and conduct long-term research under the direction of the Army Futures Command.

The location was chosen to allow the Army to collaborate closely with Carnegie Mellon, as well as with other universities and companies working on AI in the Pittsburgh region.

Speaking to the assembled crowd, General John Murray, commander of the Army Futures Command, said the partnership would have far-reaching effects on the management of the military.

“Artificial intelligence is going to be incorporated in everything we do,” said Murray. “It is not a question of if these technologies will change the character of war, it is only a question of when.”

While the general framed this as a positive development, his words raise concerns for some observers, both in Pittsburgh and around the country.

The broader mission and even the budget of the AI Task Force have yet to be decided, and much of the work will entail applying AI to more basic non-combat questions such as communications engineering. But it’s likely that the Task Force will also be at the forefront of applying AI to weapons systems.

One goal of applying AI to military weapons is to create autonomous tech that can function on battlefields without requiring the presence of soldiers on the ground, thus protecting military personnel from risk.

In a press conference with local media after their speeches, CMU president Farnam Jahanian, Army Secretary Mark Esper and Murray declined to endorse a full ban on autonomous weapons systems. But Secretary Esper explained that part of the work of the Task Force will be exploring the ethics and codes of conduct for AI systems with partners at CMU and other universities.

Carnegie Mellon’s relationship with the Pentagon goes back more than 70 years, with billions in military funding supporting a wide variety of research in healthcare, engineering and cybersecurity at the university.

While controversies over defense contracts are nothing new for CMU and other elite research institutions, a growing chorus of technologists says the trend of artificial intelligence being integrated into modern weapons systems is a new and uniquely troubling issue that demands attention and regulation.

“Unless we put some rules in place here, things are going to get out of hand,” said Mary Wareham, advocacy director with the Arms Division of Human Rights Watch. “We need to prevent the development of weapons systems that lack significant human control.”

According to Wareham, rather than making troops and citizens safer, removing human oversight and allowing machines to make decisions on when and how to apply force invites tragic mistakes and confuses notions of accountability that are key for any credible military.

In addition to uses abroad, these technologies could also be applied to policing of civilians and border patrol activities at home.

Along with her Human Rights Watch work, Wareham also serves as a spokesperson for an international coalition pushing for a treaty to ban autonomous weapons, similar to existing agreements on landmines and chemical weapons. The European Parliament announced its support for such a treaty in September, and at the Paris Peace Forum a few weeks later, UN Secretary-General António Guterres also joined the push.

At the annual meeting of the United Nations’ Convention on Certain Conventional Weapons in Geneva in November 2018, a majority of member states proposed beginning deliberations on a treaty to ban autonomous weapons. However, the United States joined Australia, Israel, Russia and South Korea in blocking the proposal.

At the CMU event, Esper questioned the efficacy of an international treaty banning autonomous weapons. “There’s lots of countries that would do this stuff regardless of what the international community says,” he said.

At the Task Force unveiling, CMU president Jahanian said that such ethical discussions will be a priority.

“One of the important benefits of having this task force be based here,” Jahanian said, “is that it’s going to give us the ability to have discussions about AI and other emerging technologies, and ethical applications of these technologies both in a military context as well as a civilian context.”

Academics around the world have raised the issue of whether AI researchers should assist with military projects. In April of 2018, the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, South Korea, announced that it would no longer conduct research into autonomous weapons following a boycott threat from more than 50 AI researchers.

In June, Google announced that it would not renew its contract to assist the military in using AI to interpret video images, after 4,000 employees signed a petition calling for “a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

In an email to NEXTpittsburgh, Georgia Crowther, a graduate student at CMU’s Robotics Institute, said she hoped to see similar activism among her peers in Oakland.

“There is no shortage of engineering and robotics projects that serve humanitarian objectives, and NREC is home to a number of such projects,” said Crowther. “Roboticists have the imperative to make a moral choice when accepting jobs or research projects and to view ourselves as conscientious objectors when it comes to giving our labor to military efforts.”

Bill O'Toole was a full-time reporter for NEXTpittsburgh until October 2019. He previously reported in Myanmar.