Humans are remarkably good at choosing to act on limited information. Computers, less so. A new DARPA program wants to train artificial intelligence to process and evaluate information the way humans do, producing actionable results from far smaller datasets than is currently possible. It’s a program of such importance that DARPA is giving it VIP status, or at least VIP as an acronym: Virtual Intelligence Processing.

“Successful integration of next-generation AI into DoD applications must be able to deal with incomplete, sparse and noisy data, as well as unexpected circumstances that might arise while solving real world problems,” reads a solicitation posted June 14. “Thus, there is need for new mathematical models for computing leading to AI algorithms that are efficient and robust, can learn new concepts with very few examples, and can guide the future development of novel hardware to support them.”
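The solicitation’s call for algorithms that “can learn new concepts with very few examples” is the few-shot learning problem. As a purely illustrative sketch (not DARPA’s method, and with made-up class labels), one of the simplest few-shot approaches is a nearest-class-mean classifier: average a handful of labeled examples into a prototype per class, then label new inputs by the closest prototype.

```python
import numpy as np

def fit_prototypes(examples):
    """examples: dict mapping class label -> array of shape (k, d), k small.
    Returns one mean "prototype" vector per class."""
    return {label: pts.mean(axis=0) for label, pts in examples.items()}

def classify(prototypes, x):
    """Assign x to the class whose prototype is nearest in Euclidean distance."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

# Three noisy examples per class -- a "very few examples" regime.
# Labels and feature vectors here are hypothetical, for illustration only.
rng = np.random.default_rng(0)
support = {
    "tank":  rng.normal([0.0, 0.0], 0.1, size=(3, 2)),
    "truck": rng.normal([1.0, 1.0], 0.1, size=(3, 2)),
}
protos = fit_prototypes(support)
print(classify(protos, np.array([0.9, 1.1])))  # lands near the "truck" cluster
```

A commercial deep network might need thousands of labeled images to separate these classes; the prototype approach makes a usable (if crude) decision from three noisy examples each, which is the spirit of what the solicitation asks for at far greater sophistication.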

To create these mathematical models, DARPA wants partners to look inward, creating AI inspired by the robust and massive parallelism seen in the human neocortex. If it is the architecture of the brain that makes humans so especially skilled at processing information quickly, then it is an architecture worth studying.

“In order to reverse engineer the human brain,” the solicitation continues, calmly, “we need to apply new mathematical models for computing that are complete and transparent and can inform next-generation processors that are better suited for third-wave AI.”

It is DARPA’s nature to inject funding into problem areas it sees as both yielding future results and not presently served by the market, and this is no different. The solicitation explicitly asks for mathematical models that have not already been the focus of AI development. It’s also looking for models that can inform the development of future hardware, rather than programs that can run on existing machines. DARPA is interested in how the hardware works in simulation, but wants partners to hold off on actually building the hardware for the model.

So, the plan goes: create a mathematical model, inspired by brains, to process information on a small and limited data set, and then design it for hardware that doesn’t exist yet. Easy as that sounds, the solicitation also asks proposers to talk about the limitations of the algorithms when applied to military tasks, and specifically limitations related to accuracy, data, computing power and robustness.

Working from limited information is the expected future of military machines. Between electronic warfare, denied environments and the very nature of battlefield events as rare, hard-to-record moments, doing more with on-board processing of limited data should enable greater autonomy. Even in the rare case where a weapon system transmits data back for algorithm refinement, that dataset will be orders of magnitude smaller than the big datasets used to train most commercial machine learning tools.

Should a proposer’s idea be accepted and they follow through both Phase 1 and Phase 2 of the project, the total award is set at $1 million. A tidy sum, for anyone who can figure out the math to make a future computer run on sparse information as effectively as a human brain.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
