Department of Defense Plans to Focus Early AI on “Low Consequence” Applications

Agency: Dept. of Defense

Navy Petty Officer 3rd Class Robert Salatino, assigned to the Mid-Atlantic Regional Maintenance Center, welds a watertight door for shipboard use at Naval Station Norfolk, Va., Oct. 17, 2019. (Photo credit: Derry Todd, U.S. Navy)

The Department of Defense (DoD) has a long way to go in developing artificial intelligence (AI) and applying it to the most pressing military problems. For now, the DoD is applying AI toward humanitarian assistance and predictive maintenance, according to the director of the Joint Artificial Intelligence Center.

"We start with low-consequence use cases for a reason," said Air Force Lt. Gen. John Shanahan during a panel discussion last week at the U.S. Naval Academy in Annapolis, Maryland. Because they are "narrow" applications, he explained, it's easier to assess results.

Shanahan said AI hasn't yet achieved the readiness level to apply toward more complex issues such as nuclear command and control or missile defense, which carry a much higher risk if they don’t work as expected.

"I think that’s not where any of us are interested in heading right now," he said.

One measure the department can apply now is the perceived risk of using AI in a particular application weighed against the potential reward, and Shanahan said he is not yet seeing the reward outweigh the risk.

"I can't show the rewards right now on mission-critical systems," he said. "On decision support, every single combatant command wants help on decision support systems: 'How can I do an operational plan in two weeks instead of two years?' That’s very, very challenging ... to take on."

The reward is great for solving a problem like decision support, especially in terms of saving time, but only if an AI system can get it right—and that's just not happening yet, according to Shanahan.

"Nobody has proven that those rewards justify the risks we’re going to take right now," he said. "Everything that we do in the business I am in is about risk. Who incurs the risk? What's the risk to mission? What's the risk to force? Is it a risk worth accepting? What I am having a hard time getting through right now is [that] I am not seeing the rewards outweigh the risk in those mission-critical cases."

Still, Shanahan is confident AI is going to be a big part of the DoD’s future. "There is no part of the Department of Defense that cannot benefit from AI," he said.

Risk is not the only obstacle; the department also faces hurdles in military culture, talent and data. Military culture requires long-term planning for the development of new systems, he explained, and a new aircraft might take decades to deliver.

"There are a lot of people that want to go forward very quickly with AI capabilities in the department, but we live by five-year budget cycles and weapons system milestones that are measured in 5- to 10-year increments, as opposed to how quickly can I take an algorithm, update it and put it back into the field," Shanahan said. "We have a long way to go to really embrace the speed and the scale of what's happening in commercial industry."

Shanahan said the DoD is making progress in learning to do acquisition and contracting more quickly. He cited as examples the Defense Digital Service, which hires top experts from industry and academia for short tours to overcome defense challenges, and the Defense Innovation Unit, which provides funding to private-sector companies to solve defense-related problems.

To view the original news release on the DoD website, visit https://www.defense.gov/explore/story/Article/2000243/dod-focuses-early-ai-use-on-low-consequence-applications/.

Category: Member Labs