To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.
Derived from a Prussian school of military thought, mission command is a form of decentralized command and control. Picture a commander who is given an objective and then trusted to meet that goal to the best of their ability, without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and obstacles, ones that map closely onto the autonomous battlefield.
“At one level, mission command really is a management of trust,” said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. “We’re continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?”
The problem for military leaders, then, is twofold: can humans trust the information and advice they receive from artificial intelligence? And, relatedly, can those humans trust that the autonomous machines they direct are pursuing objectives the same way people would?
To the first point, Robert Brown, director of the Pentagon’s multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.
“A mission command is saying: you’re going to provide your subordinates the depth, the best data you can get them, and you’re going to need AI to get that quality data. But then that’s balanced with their own ground and then the art of what’s happening,” Brown said. “We have to be careful. You certainly can lose that speed and velocity of decision.”
Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.
“How do we create the right type of decision aids that still empower people to make the call, but gives them the information content to move faster?” said Tony Frazier, an executive at Maxar Technologies.
An intelligence product that uses AI to deliver analysis and information to combatants will have to hit a sweet spot: actionable intelligence that neither bogs the recipient down in detail nor leaves them uninformed.
“One thing that’s remained consistent is folks will do one of three things with overwhelming information,” Brown said. “They will wait for perfect information. They’ll just wait, wait, wait; they’ll never have perfect information and adversaries [will have] done 10 other things, by the way. Or they’ll be overwhelmed and disregard the information.”
The third path, Brown said, is the one commanders want users to take: finding the golden needles in eight stacks of information that help them make a decision in a timely manner.
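None of the panelists described an algorithm for surfacing those needles, but the idea maps onto a familiar pattern: score each report against the commander’s intent, drop everything below a floor, and cap what reaches the human. Here is a minimal Python sketch; the fields, scoring rule, and thresholds are illustrative assumptions, not anything the panel specified.

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str        # e.g. "satellite", "signals", "patrol"
    confidence: float  # 0.0-1.0: how much the source is trusted
    relevance: float   # 0.0-1.0: match against the commander's intent
    summary: str

def golden_needles(reports, max_items=5, floor=0.5):
    """Return the few reports worth a commander's limited attention.

    The scoring rule (confidence times relevance) and both cutoffs are
    placeholders; the point is that the aid caps its own output so the
    human is informed rather than overwhelmed.
    """
    scored = [(r.confidence * r.relevance, r) for r in reports]
    keepers = [(score, r) for score, r in scored if score >= floor]
    keepers.sort(key=lambda pair: pair[0], reverse=True)
    return [r for _, r in keepers[:max_items]]
```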
Getting there, to where information empowers rather than paralyzes or dispirits, is the work of training. Adapting to the future means practicing in the future’s environment, and that means familiarizing new practitioners with the kinds of information they can expect on the battlefield.
“Our adversaries are going to bring a lot of dilemmas our way and so our ability to comprehend those challenges and then hopefully not just react but proactively do something to prevent those actions, is absolutely critical,” said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.
When a battle involves thousands of kill chains and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter those threats. It will then be the role of the human in the loop to take that filtered information and respond as best they can to the threats arrayed against them.
“What does it mean to articulate mission command in that environment, the understanding, the intent, and the trust?” said Kumashiro, referring to the fast pace of AI filtering. “When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do.”
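That division of labor, where the machine filters and the human decides, and where disconnection must not stall the mission, can be sketched as a human-in-the-loop pattern. In this hedged Python sketch, `classify`, `request_approval`, and the hold-when-disconnected default are all assumptions standing in for details the panel did not give.

```python
def process_threats(threats, classify, request_approval, connected):
    """Machine filters the flood of threats; a human approves each response.

    `classify` stands in for the filtering model and `request_approval`
    for the operator interface; both are placeholders. When the link to
    the human is down, nothing is auto-approved: actions wait on the
    pre-delegated authorities Kumashiro describes.
    """
    urgent = [t for t in threats if classify(t) == "urgent"]
    if not connected:
        return []  # disconnected from the hive: fall back to standing orders
    return [t for t in urgent if request_approval(t)]
```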
Planning not just for how these AI tools work in ideal conditions, but for how they will hold up under the degradation of a modern battlefield, is essential to making technology an aid, and not a hindrance, to the forces of the future.
“If the data goes away, and you still got the mission, you’ve got to attend to it,” said Brown. “That’s a huge factor as well for practice. If you’re relying only on the data, you’ll fail miserably in degraded mode.”
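Brown’s warning implies a design rule for the tools themselves: fail toward the mission, not toward the data. Below is a sketch of graceful degradation under stated assumptions, with a hypothetical `fetch_live` feed and cache; only the principle, not the code, comes from the panel.

```python
import time

def best_available_picture(fetch_live, cache, max_age_s=600):
    """Prefer live data; fall back to a recent cache; otherwise signal
    degraded mode so the operator leans on training and intent.

    `fetch_live`, the cache layout, and the staleness window are all
    illustrative assumptions.
    """
    try:
        picture = fetch_live()
        cache["picture"], cache["time"] = picture, time.time()
        return picture
    except ConnectionError:
        if "time" in cache and time.time() - cache["time"] <= max_age_s:
            return cache["picture"]  # stale, but recent enough to act on
        return None  # degraded mode: the mission continues without the feed
```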
Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.