If Amphetamine-Popping Losers in Nevada Weren’t Bad Enough, Pentagon Works to Unleash Artificial Intelligence Drones

Third World peasants can now look forward to a future in which they are hunted down by killer robots

You think the Empire’s drone wars are bad now?

Suppose you knew that I was manufacturing gunpowder. And rifle barrels, trigger mechanisms, and sighting equipment. And I was open about this, not trying to hide my activities and projects from anyone. Indeed, I am absolutely unapologetic about my activities. Suppose that I claimed that I was not planning on making rifles and ammunition, just components, and even if I was, these rifles wouldn’t be used to shoot anything, and certainly not to kill anything or anyone.

To be clear, I’d reinforce that even if fully functioning rifles and ammunition were in fact assembled from these parts I was making, the rifles would just sit in a case and I would never allow them to be used. I’d make some comments about how rifles and ammunition weren’t really very useful when assembled anyway – that wasn’t the goal – and that the component parts were really more valuable, and also innocuous.

To be sure that you understood, I would assert that my rifle and ammunition components would always have to be designed so that a human would intervene if anyone tried to assemble the parts or use the assembled rifles. But I would also want to make sure you knew that other people, who weren’t as ethical and honest as I say I am, were working very hard on rifles and ammunition.

Would you believe that what I am doing and what I intend won’t, at some point, lead to rifles being used to shoot something or someone?

The mental mush above is similar to the Pentagon’s claims that it is aggressively pursuing autonomous capability but is not building autonomous weapons systems. But that the Pentagon is indeed working towards killer robots should not be a surprise, because autonomous, artificial intelligence (AI)-directed weapons systems are the obvious, inevitable culmination of more than 50 years of concerted effort to create and refine the technological building blocks and to evolve the operational concepts.

The Pentagon has been working on machine aids to, and machine replacement for, human decision-making for decades. During the Cold War, the Pentagon first developed integrated, radar-based moving target indication systems for air and ballistic missile defense, and this work evolved into airborne detection of moving ground vehicles. Advanced torpedoes, the Phoenix missile carried by the F-14 Tomcat, and the Longbow version of the Hellfire missile were designed to be fired at targets beyond visual range by associating detected target signals with target libraries. The anti-ballistic missile Launch on Warning system and air defense systems such as the Navy’s Close-In Weapon System, the Aegis missile system, and the Israeli Iron Dome were designed to be semi- or fully autonomous in order to compensate for the timeframes human decisions require.

The military also wanted to speed up its routine airstrikes. Following operational frustrations and failures to eliminate moving ground vehicles like mobile missile launchers during the 1990s in operations such as Desert Storm and Allied Force, senior military leaders – notably from the Air Force – championed technological acceleration that would compress the targeting-killing cycle to “single-digit minutes.”

The military’s term for finding a target and shooting it is the “kill chain.” A common formulation of the steps in this chain is Find-Fix-Track-Target-Engage-Assess. To date, and only comparatively recently, the most mature part of this kill chain is the “kill” part – “Engage” – following decades of research and development in guidance technology.

The Pentagon and press portrayed Desert Storm as a high-tech festival of accurately delivered munitions, but the vast majority of air-delivered weapons were unguided, just as they had been in World Wars I and II, Korea, and Vietnam. In fact, most of the “precision” weapons used in Desert Storm were Vietnam vintage. The 1960s-era semi-active laser technology and the 1990s-era GPS-aided inertial guidance systems are now common on the Pentagon’s most popular weapons, such as the Hellfire missile and the Joint Direct Attack Munition (JDAM) guidance kit for standard aerial warheads.

Today the U.S. and most modern militaries rarely use unguided bombs and over the past several years the Pentagon has dropped so many bombs that it has shortages in its inventory.

But for all of the improvements in weapons guidance, the most difficult parts of the kill chain come before and after “Engage.” These other parts of the kill chain are both the most important and the ones where the military has the least capability. The hardest steps are discovering, identifying, and locating targets and then assessing the damage after the airstrike, often using equipment such as optical magnification or radar-based remote sensing mounted on manned and unmanned aircraft.

Technical challenges abound. Moisture content, particulate obscuration, and temperature gradients in the atmosphere complicate optical and radar sensing. Visible, infrared, laser, and radar wavelengths are each limited in various ways by physics. And sensor design must make engineering tradeoffs among the power supplied by the aircraft, the computational processing of signal returns, and the programming logic that determines what level of signal return flips a given display pixel from white to gray to black, or to a particular color.

Further, all remote sensing systems, regardless of the technology, include a three-dimensional location uncertainty termed target location error. Add in a time delay between when the sensing system records the location estimate and when that information is received by the shooter, and you add a fourth dimension of target location uncertainty.
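To make the scale of the problem concrete, here is a minimal sketch, with entirely invented numbers, of how a fixed sensor error and a data-link delay combine into a growing zone of uncertainty around a moving target. The function name and the root-sum-square combination are illustrative simplifications, not a fielded targeting model.

```python
import math

def uncertainty_radius(tle_m: float, target_speed_mps: float, latency_s: float) -> float:
    """Rough radius of the circle the target could be anywhere inside.

    tle_m            -- the sensor's inherent target location error, in meters
    target_speed_mps -- how fast the target could plausibly be moving
    latency_s        -- delay between sensing and the shooter receiving the fix
    """
    drift = target_speed_mps * latency_s          # how far the target may have moved
    return math.sqrt(tle_m ** 2 + drift ** 2)     # combine the two contributions

# Hypothetical case: 15 m sensor error, a vehicle doing 20 m/s (~45 mph),
# and a 30-second delay before the fix reaches the shooter.
print(round(uncertainty_radius(15.0, 20.0, 30.0)))  # roughly 600 m of uncertainty
```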

Of course, these technical issues are the comparatively easy part. The Pentagon knows that it will not always be able to bomb with impunity as it has in recent years. Someday, somewhere, the U.S. military will again run into a capable surface-to-air threat, and this will lengthen and expand the kill chain geometrically and temporally, pushing aircraft and their sensors and weapons further away from targets and exacerbating the angular measurement challenges of target detection and identification.

For example, a single pixel on a sensor’s video display that might cover a handful of inches from a few miles away will cover dozens of feet from dozens of miles away. In other words, there is always some distance between a target and a sensor beyond which one radar wavelength or one video pixel will be larger than the target, and so the pilot, drone operator, or drone AI will not be able to see, let alone identify, its target.
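The geometry behind that claim is simple small-angle arithmetic: the ground footprint of one pixel grows linearly with range. A rough sketch, assuming a hypothetical per-pixel angular resolution of 50 microradians (the specific numbers are illustrative only):

```python
METERS_PER_MILE = 1609.34
IFOV_RAD = 50e-6  # hypothetical per-pixel angular resolution (50 microradians)

def pixel_footprint_m(range_miles: float) -> float:
    """Approximate ground distance covered by a single pixel at the given range."""
    return range_miles * METERS_PER_MILE * IFOV_RAD  # small-angle approximation

for miles in (3, 10, 30, 60):
    print(f"{miles:>2} miles -> {pixel_footprint_m(miles):4.2f} m per pixel")
# At some range the footprint exceeds the size of the target itself,
# and the target disappears into a single pixel.
```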

The Pentagon is working on these problems. A quick search of the military’s recent requests for research through the Small Business Innovative Research (SBIR) program shows that the building blocks for autonomous weapons systems are in work. 

One request aims to automate the simultaneous tracking of multiple moving ground targets from multiple sensors on multiple unmanned aircraft.

Another seeks to “support strategic reasoning” via computer synthesis and optimization of multiple intelligence information streams for use by commanders and analysts in the air operations center.

Another expresses the desire to convert “sensor data into actionable information” by integrating multiple sources of information to enhance military vehicle sensor feeds.

Still another wants to find a way to reduce the “cognitive burden of human-robot teaming” for circumstances where a single human is teamed with multiple robotic partners.

The Army is currently developing a robotic “virtual crewman” to help soldiers through the complexity and chaos of war. The program director of the Army’s ATLAS system asserts that this is just an aid and not a replacement for human decision-making: “The algorithm isn’t really making the judgment about whether something is hostile or not hostile. It’s simply alerting the soldier, [and] they have to use their training and their understanding to make that final determination.”

But let’s think through how such a system has to work in order to be useful to the soldier. The target recognition system does not add any value if it alerts the soldier to every tree, dog, building, or civilian within view. Rather, the computer algorithms can only add value to human decision-making if they synthesize information streams, filter out what is unnecessary from the environment, and prioritize the things that are likely targets.
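In other words, the aid is only useful if it suppresses most of what the sensor sees. A minimal sketch of that filtering logic follows; the class names and threshold are invented for illustration and are not drawn from any actual ATLAS documentation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the recognition model thinks it is seeing
    confidence: float  # the model's score, 0.0 to 1.0

# Only classes the system was trained to treat as threats are ever surfaced,
# and only above a tuned confidence threshold.
THREAT_CLASSES = {"armed person", "technical", "armored vehicle"}
ALERT_THRESHOLD = 0.7

def alerts(detections):
    """Return only the detections the soldier will ever be shown."""
    return [d for d in detections
            if d.label in THREAT_CLASSES and d.confidence >= ALERT_THRESHOLD]

scene = [
    Detection("dog", 0.95),
    Detection("civilian", 0.88),
    Detection("armed person", 0.73),  # the only thing the soldier sees
    Detection("armed person", 0.41),  # silently dropped: below threshold
]
print(alerts(scene))
```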

By definition, therefore, a useful “virtual crewman” will, more often than not, have to present to its human teammates whatever it has been programmed and trained to see as targets. We do not know how the AI for the ATLAS system is trained, but we do know that history is replete with target identification errors and that biases can be built into, or trained into, AI.

When machines optimize the information, the human soldiers will have only a limited set of information to sort through and with which to make choices. So the assertion that humans will be making the “decision” to fire is fraught and misleading. The information context and the decision set will have been winnowed down to pulling the trigger or not. The soldier will be nearly analogous to a rifleman in a firing squad, with perhaps just a bit more human judgment in the mix.

We can readily foresee the calls for full trigger-pull autonomy following the inevitable uproar when ATLAS (or other such systems) fails in some cases to alert soldiers to threats, or when the communications link between an overwatching drone and a human operator on the other side of the world fails at a critical time, resulting in American deaths. Technology enthusiasts will assert that “it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans” and, ceteris paribus, autonomous target engagement thus becomes necessary to protect American lives.

Such enthusiasts state that ethical rules of engagement will be programmed into autonomous systems. But how do we square this rosy assertion with the human decision-making logic under which acceptable targets for killing need merely be estimated to be a military-age male holding something that looks like a weapon, to have a pattern of behavior over time that creates a certain “signature” of nefariousness, or simply to be near a high-value target? Will the autonomous weapon logic for vetting targets be stricter? This is a fantasy.

Advocates of increasing autonomy point to the Pentagon’s AI Strategy and to Department of Defense Directive (DoDD) 3000.09 as evidence of caution, citing statements about the necessity of designing systems for human interface and oversight.

However, simply because an autonomous system affords human intervention via design does not mean that the functioning of a lethal autonomous system requires human involvement. This is the difference between “control by design” and “control in use.” Indeed, US leadership is actively blocking attempts at international regulation or the formal stigmatization of the development and use of autonomous killer robots, undermining assertions that such capability is not a US goal.

AI in all its potential forms is the perfect national security bogeyman. In the near future, virtually any personal or national inconvenience or anomaly may be blamed on malevolent AI. The Russians are working on it, after all, and the Pentagon is worried about technology gaps, both real and imagined.

Indeed, if Russian social media trolls are capable of inspiring chaos at a fraction of the cost of operating Radio Free Europe – let alone of toppling regimes – one readily imagines how statements by Russian generals that “Broad employment of precision and other types of new weapons, including robotic ones, will be fundamental characteristics of future conflicts” are sure to keep US military and intelligence budgets growing into the future.

Dave Foster is a data analyst in the private sector. From 1988-2018 he was a Marine Corps pilot and a Defense Department contract and civilian weapons engineer and operations analyst. He has degrees in engineering, business, and history.

Source: Antiwar.com
