WASHINGTON — A pair of artificial intelligence projects from U.S. Army researchers aims to break down the communication barriers that limit how well soldiers and AI systems work together.

Both projects support the Army's Next-Generation Combat Vehicle modernization priority, which includes a focus on autonomous vehicles and AI-enabled platforms.

The first project, the Joint Understanding and Dialogue Interface, or JUDI, is an AI system that can understand a soldier's intent when the soldier gives a robot verbal instructions. The second project, Transparent Multi-Modal Crew Interface Designs, is meant to give soldiers a better understanding of why AI systems make the decisions they do.

“We’re attacking a similar problem from opposite ends,” said Brandon Perelman, a research psychologist at the Army Research Laboratory who worked on Transparent Multi-Modal Crew Interface Designs.

The JUDI system will improve soldiers’ situational awareness when working with robots because it will transform that relationship from a “heads down, hands full” to a “heads up, hands free” interaction, according to Matthew Marge, a computer scientist at the lab. Simply put, “this means that soldiers will be more aware of their surroundings,” he said.

Natural language AI systems are available on the commercial market, but the JUDI system requires a level of awareness in a physical environment that isn’t matched by commercial products. Commercial systems can understand what a person is saying and take instructions, but they don’t know what is going on in the surrounding area. For the Army, the autonomous system needs to know that.

“You want a robot to be able to process what you’re saying and ground it to the immediate physical context,” Marge said. “So that means the robot has to not only interpret the speech, but also have a good idea of where it is [in] the world, the mapping of its surroundings, and how it represents those surroundings in a way that can relate to what the soldier is saying.”
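In software terms, "grounding" means linking the words in an utterance to objects in the robot's internal map of its surroundings. The sketch below illustrates the idea under invented assumptions: a toy world model, a fixed robot pose, and naive keyword matching. It is not a description of JUDI's actual implementation.

```python
import math

# Hypothetical world model: object label -> (x, y) map position.
WORLD = {"door": (2.0, 5.0), "truck": (-4.0, 1.0)}
# Hypothetical robot pose: position plus heading (here, facing +y).
POSE = {"x": 0.0, "y": 0.0, "heading": math.radians(90)}

def bearing(label: str) -> float:
    """Angle from the robot's heading to the object, in degrees."""
    ox, oy = WORLD[label]
    angle = math.atan2(oy - POSE["y"], ox - POSE["x"]) - POSE["heading"]
    # Normalize to (-180, 180] so left/right is easy to read off.
    return math.degrees(math.atan2(math.sin(angle), math.cos(angle)))

def ground(utterance: str):
    """Match words in the utterance to a mapped object, then resolve
    where that object sits relative to the robot."""
    for label, pos in WORLD.items():
        if label in utterance.lower():
            side = "left" if bearing(label) > 0 else "right"
            return label, pos, side
    return None  # no mapped object matches the soldier's words

print(ground("scan the truck"))  # -> ('truck', (-4.0, 1.0), 'left')
```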

Researchers also looked into how soldiers speak to robots, and how robots talk back. In prior research, Marge found that humans speak to technology in much simpler, more direct language than they use with other people, with whom they usually discuss a course of action and the steps involved. Those studies, however, were conducted in safe environments rather than under stress resembling combat, where a soldier's language could change. That is an area where Marge knows the Army must do more research.

“When a soldier is under pressure, we don’t want to have any limit on the range of words or phrases they have to memorize to speak to the robot,” Marge said. “So from the beginning, we are taking an approach of so-called natural language. We don’t want to impose any restrictions on what a soldier might say to a robot.”


JUDI’s ability to determine a soldier’s intent, which Army researchers define as whatever the soldier wants the robot to do, relies on an algorithm that matches the verbal instruction against existing data. The algorithm finds the instruction in its training data with the highest overlap and sends the corresponding command to the robot.
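That description suggests a nearest-neighbor match over training utterances. The sketch below shows one simple way such overlap matching could work, using a hypothetical command set and a Jaccard word-overlap score; JUDI's real training data and matching algorithm are not public, so every name here is an assumption.

```python
# Hypothetical training data: utterances paired with robot commands.
TRAINING_DATA = {
    "move to the door": "NAVIGATE(door)",
    "turn around and go back": "TURN(180); NAVIGATE(previous_waypoint)",
    "stop where you are": "HALT()",
    "scan the building on your left": "SCAN(building, relative=left)",
}

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two utterances."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not (set_a | set_b):
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

def match_intent(utterance: str) -> str:
    """Return the command whose training utterance overlaps most.
    A real system would also reject matches below some threshold."""
    best = max(TRAINING_DATA, key=lambda t: word_overlap(utterance, t))
    return TRAINING_DATA[best]

print(match_intent("go to the door"))  # -> NAVIGATE(door)
```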

The JUDI system, Marge said, is scheduled for field testing in September. JUDI was developed with help from researchers at the University of Southern California’s Institute for Creative Technologies.

The Transparent Multi-Modal Crew Interface Designs project tackles the AI-human interaction from the other side.

“We’re looking at ways of improving the ability of AI to communicate information to the soldier to show the soldier what it’s thinking and what it’s doing so it’s more predictable and trustworthy,” Perelman said. “Because we know that … if soldiers don’t understand why the AI is doing something and it fails, they’re not going to trust it. And if they don’t trust it, they’re not going to use it.”

Mission planning is one area where Transparent Multi-Modal Crew Interface Designs may prove useful. Perelman compared the program to driving down the highway while a navigation app responds to changes along the route: a driver may prefer to stay on the highway for the sake of convenience, avoiding extra turns, even if it takes a few minutes longer.

“You can imagine a situation during mission planning, for example, where an AI proposes a number of courses of action that you could take, and if it’s not able to accurately communicate how it’s coming up with those decisions, then the soldier is really not going to be able to understand and accurately calculate the trade-offs that it’s taking into account,” Perelman said.
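To make that concrete, the hypothetical sketch below scores two invented routes and prints the factor-by-factor breakdown rather than only the winner, which is the kind of trade-off transparency Perelman describes. None of the factors, weights, or numbers come from the Army program.

```python
# Invented cost weights: each factor is a penalty (negative weight).
FACTORS = {"time_min": -1.0, "exposure": -5.0, "fuel_used": -0.5}

# Invented courses of action with their raw factor values.
ROUTES = {
    "highway":   {"time_min": 32, "exposure": 1, "fuel_used": 10},
    "back_road": {"time_min": 27, "exposure": 3, "fuel_used": 8},
}

def score(route: dict) -> float:
    """Weighted sum of all factors; higher (less negative) is better."""
    return sum(FACTORS[f] * v for f, v in route.items())

# Show *why* each option scored as it did, not just which one won:
# the slower highway route wins here because it is far less exposed.
for name, route in ROUTES.items():
    parts = ", ".join(f"{f}={FACTORS[f] * v:+.1f}" for f, v in route.items())
    print(f"{name}: total={score(route):.1f} ({parts})")
```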

He added that through lab testing, the team improved soldiers’ ability to predict the AI’s future mobility actions by 60 percent and enabled soldiers to choose between multiple courses of action 40 percent faster.

The program has transitioned to the Army Combat Capabilities Development Command Ground Vehicle Systems Center’s Crew Optimization and Augmentation Technologies program, where it will take part in phase two of Mission Enabler Technologies-Demonstrators field testing.

Andrew Eversden covers all things defense technology for C4ISRNET. He previously reported on federal IT and cybersecurity for Federal Times and Fifth Domain, and worked as a congressional reporting fellow for the Texas Tribune. He was also a Washington intern for the Durango Herald. Andrew is a graduate of American University.
