Darpa Spy Cams to Find Threats in ‘Weak Evidence’

The military’s got spy drones and surveillance cameras all over Afghanistan, and they’re looking to add even more. But the heaps of footage are already more than analysts can handle. Now, the Pentagon’s launching a new effort that will use software to help human analysts and improve the speed and accuracy of spy-cam threat detection — even when there’s only “weak evidence” of an impending attack.

Darpa, the military’s far-out research arm, is looking for proposals for a software program that will zero in on useful intelligence collected by all those aerial cameras. And they want the system to work in two ways: as a forensic tool to scan older footage and help human analysts trace the onset of specific events, and as a real-time analyst itself, using algorithms to detect potential threats. Darpa’s calling it the “Persistent Stare Exploitation and Analysis System” — “PerSEAS” (ugh) for short.

The military’s already got surveillance systems that work in real time and scan footage for unusual activity. Angel Fire, a program launched by the Air Force, works like an airborne security camera and transmits footage to military analysts. And Constant Hawk, which can analyze still images for potential threats, has been successful in curbing convoy attacks. But these wide-area motion imagery (WAMI) cameras can generate terabytes of data from a single mission, and it can take hours, days or weeks for military eyes to review it all. “The tedious nature of current exploitation capabilities limits the ability to fully utilize the available data. Consequently, critical battlefield questions go unanswered and timely threat cues are missed,” Darpa notes.

The agency is hoping its system will work faster and connect the dots between specific combinations of activities, objects and events that often precede threats — erratic driving before a roadside suicide attack, say.
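To make that concrete, here’s a bare-bones sketch of “connecting the dots” in Python: scan a vehicle track’s event stream for a known precursor sequence occurring in order within a time window. The event labels, pattern and window are invented for illustration; Darpa’s solicitation doesn’t specify any of them.

```python
# Greedy first-match check: does a track's event stream contain a known
# precursor pattern, in order, within a time window? The pattern and
# labels below are hypothetical examples, not from the solicitation.
PATTERN = ["scout_pass", "erratic_driving", "vehicle_stops_on_route"]

def matches_pattern(events, pattern, window_s=3600):
    """events: list of (timestamp_s, label) pairs, sorted by time."""
    idx, start = 0, None
    for t, label in events:
        if label == pattern[idx]:
            if idx == 0:
                start = t               # pattern starts here
            idx += 1
            if idx == len(pattern):     # matched every step, in order
                return (t - start) <= window_s
    return False

track_events = [(100, "scout_pass"), (900, "turn"),
                (1400, "erratic_driving"), (2100, "vehicle_stops_on_route")]
print(matches_pattern(track_events, PATTERN))  # True: all three, in order
```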

Data on those precursor patterns is already available from post-event analysis, but the software would also learn as it went, modifying the algorithms that distinguish potentially hostile activities to accommodate “the transient nature of insurgent tactics.” And Darpa wants a system that can adapt to different circumstances:

This technology should be able to build normalcy models based on observations over several days or hours, as appropriate, and then discover and identify anomalies as seen in subsequent WAMI sequences.
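In machine-learning terms, that’s garden-variety anomaly detection: fit a statistical baseline over a quiet window, then score later observations by how far they stray from it. Here’s a minimal sketch in plain Python, assuming the WAMI pipeline has already boiled footage down to per-track features like speed and dwell time; every feature, value and threshold below is made up for illustration.

```python
# Build a per-feature "normalcy model" (mean, stdev) from a baseline window,
# then flag later observations by their largest z-score across features.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Observation:
    track_id: str
    speed_mps: float           # average speed of the tracked vehicle
    heading_change_deg: float  # how sharply it turns between frames
    dwell_s: float             # seconds spent loitering in one spot

FEATURES = ("speed_mps", "heading_change_deg", "dwell_s")

def fit_normalcy(baseline):
    """Per-feature (mean, stdev) statistics from a baseline window."""
    stats = {}
    for f in FEATURES:
        values = [getattr(obs, f) for obs in baseline]
        stats[f] = (mean(values), stdev(values))
    return stats

def anomaly_score(stats, obs):
    """Largest z-score across features: distance from 'normal' behavior."""
    return max((abs(getattr(obs, f) - mu) / sigma
                for f, (mu, sigma) in stats.items() if sigma > 0),
               default=0.0)

# Baseline: a stretch of unremarkable traffic (synthetic stand-in data).
baseline = [Observation(f"t{i}", 12 + i % 5, 4 + i % 3, 10 + i % 7)
            for i in range(50)]
stats = fit_normalcy(baseline)

# A later track that crawls and loiters far outside the baseline.
suspect = Observation("t99", 2.0, 45.0, 600.0)
if anomaly_score(stats, suspect) > 3.0:  # threshold picked for illustration
    print(f"flag {suspect.track_id} for analyst review")
```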

So PerSEAS should be quicker and more adaptable than human analysts. But it should also be more intuitive: Darpa wants the program to detect threats that stem from “a set or sequence of normal activities” or “weak evidence,” giving troops plenty of time to intervene; a sketch of how weak cues might add up to an alert follows the excerpt below. The program sounds foolproof, but Darpa’s not quite ready to entrust computers with wartime decision-making. They’re asking for a system that’ll let grunts double-check any PerSEAS red alert:

How will the user know that those two vehicles highlighted were the probable attack vehicles in the suicide vehicle bombing? How will the user understand the significance of a highlighted facility which may represent a new meeting location for a terrorist group? Users should have full access to the hierarchy of information being exploited, thus allowing the analyst to drill down through the data to better understand the results and the implications.
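As for alarming on weak evidence without crying wolf: one textbook approach is to treat each weak cue as a log-likelihood ratio and sum them, naive-Bayes style, so that no single cue trips the alert but a combination can. The cues and weights below are invented; the solicitation doesn’t say how PerSEAS should actually weigh evidence.

```python
# Naive-Bayes-style evidence fusion: each cue contributes
# log(P(cue | threat) / P(cue | normal)), and the sum is compared
# against an alert threshold. All cues and numbers are hypothetical.
import math

CUE_WEIGHTS = {
    "erratic_driving":      math.log(4.0),  # 4x likelier before an attack
    "repeated_drive_bys":   math.log(3.0),
    "loitering_near_route": math.log(2.5),
    "vehicle_swap":         math.log(2.0),
}

def threat_score(observed_cues):
    """Sum the evidence contributed by each observed cue."""
    return sum(CUE_WEIGHTS.get(cue, 0.0) for cue in observed_cues)

ALERT_THRESHOLD = math.log(10.0)  # demand roughly 10:1 combined odds

# Each cue alone stays below the threshold...
print(threat_score({"erratic_driving"}) > ALERT_THRESHOLD)       # False
# ...but three weak cues together trip the alert.
print(threat_score({"erratic_driving", "repeated_drive_bys",
                    "loitering_near_route"}) > ALERT_THRESHOLD)  # True
```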
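And that drill-down requirement is, at bottom, bookkeeping: every alert keeps links to the evidence beneath it, down to the raw tracks and frames. Here’s a rough sketch of what such a hierarchy might look like, with entirely hypothetical fields.

```python
# Each alert node links to its supporting evidence, so an analyst can
# drill from a red alert down to raw track and frame references.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    label: str                  # human-readable summary of this node
    source: str                 # e.g. a track ID or WAMI frame range
    children: list["Evidence"] = field(default_factory=list)

def drill_down(node, depth=0):
    """Print an alert's supporting evidence as an indented tree."""
    print("  " * depth + f"{node.label}  [{node.source}]")
    for child in node.children:
        drill_down(child, depth + 1)

alert = Evidence("probable attack vehicles flagged", "alert #17", [
    Evidence("erratic driving", "track 12", [
        Evidence("three U-turns near convoy route", "frames 10400-10650")]),
    Evidence("loitering near checkpoint", "track 31", [
        Evidence("stationary 9 min at intersection", "frames 9800-10200")]),
])
drill_down(alert)
```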