26°23'26.6"N 56°44'34.8"E

HAL 9000 made Artificial Intelligence tangible in the same way that it made AI distant. Kubrick’s rendition of Arthur C. Clarke’s benevolent-then-malevolent personified computer offered the full package of autonomous horror, a super-capable mind with an unblinking stare given total control over the mechanical processes needed to sustain human life. The fiction that dominated mid-century understandings of AI offered this Gestalt entity, synthetic life brought forth by human hands. AI in 2019 is not the beast of fiction. It is, instead, something far more subtle: a series of tools running inside machines that are so useful as to be taken for granted.

Consider, if you will, the way in which AI is hidden in the news from last week.

A robot, aided by an autonomous stabilization system (and possibly guidance systems), flew over the amphibious assault ship USS Boxer. Parked on the Boxer's deck was a vehicle with an electronic warfare package, the Light Marine Air Defense Integrated System (LMADIS). Its sensors look for small-to-medium-sized drones, scan them against a database, check whether they are friendly or hostile, and then, at human direction, jam the drones out of the sky. Depending on how AI is defined, AI is responsible for somewhere between all and none of that process.
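That detect-classify-engage loop can be sketched in a few lines of code. This is a purely hypothetical illustration, not the real LMADIS software; every name here (the signature sets, the `classify` and `engage` functions) is invented to show where machine reasoning sits at each step, and where the human stays in the loop:

```python
# Hypothetical sketch of a human-in-the-loop counter-drone pipeline.
# Nothing here is drawn from the actual LMADIS; the point is that each
# stage -- detection, database lookup, engagement -- is a place where
# "AI" may or may not be said to live.

from dataclasses import dataclass

@dataclass
class Contact:
    signature: str   # sensed RF/radar signature of the detected object
    range_m: float   # distance to the contact in meters

# Toy signature database standing in for a threat library (assumption).
FRIENDLY_SIGNATURES = {"blue-quadcopter"}
HOSTILE_SIGNATURES = {"unknown-uav", "red-fixedwing"}

def classify(contact: Contact) -> str:
    """Scan a contact against the database; fall back to 'unknown'."""
    if contact.signature in FRIENDLY_SIGNATURES:
        return "friendly"
    if contact.signature in HOSTILE_SIGNATURES:
        return "hostile"
    return "unknown"

def engage(contact: Contact, operator_approves: bool) -> str:
    """Jam only hostile contacts, and only at human direction."""
    if classify(contact) == "hostile" and operator_approves:
        return "jam"
    return "track"
```

Note that `engage` never jams on its own: the machine does the watching and matching, while the lethal (or in this case, signal-killing) decision waits on `operator_approves`. That division of labor is exactly why the same system reads as "all AI" or "no AI" depending on the definition in use.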

I'm Kelsey D. Atherton, reporting from New Haven, Connecticut, and this is the third installment of Tomorrow Wars.

The deck-mounted LMADIS is a novel adaptation to a modern threat, and its development has been underway for some time. The systems that went into it are not particularly novel, nor, really, is the incident in which it jammed a drone over the Strait of Hormuz. I mention it here because of the background way that machine reasoning works: as a new function of a tool, rather than the defining feature. We may not be living fully in an AI age, but we regularly encounter machines with varying degrees of autonomy.

AI today is ever-present, even if it’s not an unblinking orange eye reading lips and driven by the demands of plot. Understanding both the ubiquity and the limits of AI as it exists is essential to the modern moment. Onwards to the news!

° 1. WHAT’S NERF GOT TO DO WITH IT?

Counter-sniper missions seem an obvious use case for robots, but there's no easy way to program a machine to hunt the people hunting people without, well, phrasing it like that. The Naval Surface Warfare Center, in its biannual Chief's Challenge, asked teams to build autonomous counter-sniper robots, and then staged a competition where both the robots and their array of targets all used Nerf weapons. The adaptability of the soft, foam-ball-firing machines proved well-suited for an outdoor spectacle and light-hearted write-ups. It also outlines a valid path to the specific kind of lethal autonomy that activists warn about, that international law is being formulated to regulate, and that commanders may most want in combat.

° 2. WHERE WE’RE GOING, WE STILL NEED MAPS

Navigation tools are among the more everyday algorithms people encounter and trust: GPS-linked devices that trace paths and feed real-time data on everything from traffic to weather. They are incredibly useful, but require an always-on connection to both GPS and other data sources. Wayfinding from a fixed map and knowledge of the end location are useful skills for operating in denied environments, be they passive (blocked by, say, mountains) or active (jammers deliberately messing with signals). In June, researchers at DeepMind (the Google-affiliated AI house) trained navigation algorithms to plot a route through a maze of streets they had not seen before, with just an aerial photo of the area serving as a map and knowledge of the end destination. Machines eventually built on this AI may well feel like the navigation tools we have now, minus the usual disruptions in mountains. And in actively denied environments, it's not hard to see the appeal of autonomous machines that can still navigate despite jamming.
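The underlying problem, plotting a route from nothing but a static map and a destination, is a classical one even if DeepMind's learned approach to it is not. As a point of contrast, here is the textbook version: breadth-first search over a street map, needing no live signal at all. This sketch is illustrative only and bears no resemblance to DeepMind's architecture:

```python
# Map-only wayfinding as breadth-first search: the classical baseline
# that learned navigation systems are compared against. No GPS, no
# live data feed -- only the fixed map and the end destination.

from collections import deque

def find_route(grid, start, goal):
    """Shortest route on a street map encoded as a grid.

    grid: list of equal-length strings, '.' = passable, '#' = blocked.
    start, goal: (row, col) tuples. Returns the path as a list of
    (row, col) steps, or None if the map offers no route.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # the map is exhausted and the goal was never reached

# Usage: a tiny 3x3 "city" with two blocked cells.
route = find_route([".#.", "...", ".#."], (0, 0), (2, 2))
```

The hard part of the DeepMind work is precisely what this sketch assumes away: turning a raw aerial photo into something graph-like that a planner (learned or classical) can search over.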

° 3. AI QUESTIONS AT HUMAN SPEEDS

Tackling the national security implications of artificial intelligence is such an important task that in 2018 Congress established a commission specifically to study exactly that. The commission’s initial report is still not available, though that is perhaps related to the government shutdown interfering with the initial publishing schedule. While we wait for the report, it’s worth looking at how holistic the scope of the commission’s mandate is, encompassing subjects from counterintelligence risks to economic impact to how the United States can prepare citizens for a future with more ever-present AI. These are worthy questions, worth careful thought and deliberation. They are also live questions, with intelligence services, commercial and government actors, and domestic and foreign groups all actively experimenting with AI in real time.

That’s all for this fortnight. Questions, comments, or inquiries about whether or not R2-D2 and C-3PO count as proper AI or something else entirely, email me at katherton@c4isrnet.com.


Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
