The Pentagon’s push into artificial intelligence is accelerating hard decisions about lethal autonomous weapons.


NATIONAL HARBOR, Md. (AP) — Artificial intelligence employed by the U.S. military has piloted pint-sized surveillance drones in special operations forces’ missions and helped Ukraine in its war against Russia. It tracks soldiers’ fitness, predicts when Air Force planes need maintenance and helps keep tabs on rivals in space.

The Pentagon’s goal is to field a large number of inexpensive, expendable AI-enabled autonomous vehicles by 2026 to keep pace with China. The ambitious initiative, dubbed Replicator, seeks to accelerate the adoption of military platforms that are small, smart and cheap, Deputy Secretary of Defense Kathleen Hicks said in August.

Replicator’s funding and specifics remain unclear, but the initiative is expected to accelerate hard decisions about which AI technology is mature and trustworthy enough to deploy, including in weaponized systems.

Scientists, industry experts and Pentagon officials broadly agree that the United States will soon have fully autonomous lethal weapons. And though officials insist humans will remain in control, experts say advances in data-processing speed and machine-to-machine communication will inevitably relegate people to supervisory roles.

That will be especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on such weapons, and neither China, Russia, Iran, India nor Pakistan has signed a U.S.-initiated pledge to use military AI responsibly.

It is unclear whether the Pentagon is currently assessing any fully autonomous lethal weapons system for deployment, as a 2012 directive requires. A Pentagon spokesperson would not say.

Replicator highlights the immense technological and personnel challenges facing Pentagon procurement and development as the AI revolution promises to transform how wars are fought.

The Department of Defense is struggling to adopt the latest breakthroughs in AI and machine learning, said Gregory Allen, a former top Pentagon AI official now with the Center for Strategic and International Studies think tank.

The Pentagon’s portfolio includes more than 800 unclassified AI-related projects, many still in testing. Most use machine learning and neural networks to help humans gain insights and work more efficiently.

The AI now used in the Defense Department mostly augments and supports humans, said Missy Cummings, director of George Mason University’s robotics center and a former Navy fighter pilot. AI is not operating on its own, she said; people are using it to better understand the fog of war.

AI-assisted tools are already monitoring potential threats in space, the latest frontier in military competition.

China envisions using AI, including on satellites, to make decisions about who is and is not an adversary, Lisa Costa, the U.S. Space Force’s chief technology and innovation officer, said at an online conference this month.

The United States aims to keep pace.

An operational prototype called Machina lets Space Force autonomously track more than 40,000 objects in space, orchestrating thousands of data collections nightly with a global telescope network.

Machina’s algorithms task the telescope sensors. Computer vision and large language models tell them which objects to track, and AI choreographs the process, drawing immediately on astrodynamics and physics datasets, Col. Wallace ‘Rhet’ Turnbull of Space Systems Command said at a conference in August.

Another AI project at Space Force, he said, analyzes radar data to detect imminent adversary missile launches.

Elsewhere, the Air Force uses AI’s predictive powers to anticipate the maintenance needs of more than 2,600 aircraft, including B-1 bombers and Black Hawk helicopters, and keep them flying.

Machine-learning models can identify possible failures many hours before they happen, said Tom Siebel, CEO of C3 AI, the Silicon Valley company that holds the contract. C3’s technology also models missile trajectories for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.

Among health-related efforts is a pilot project tracking the fitness of the Army’s entire Third Infantry Division, more than 13,000 soldiers. Predictive modeling and AI help reduce injuries and improve performance, said Maj. Matt Visser.

In Ukraine, AI provided by the Pentagon and its NATO allies is helping to thwart Russian aggression.

NATO allies share intelligence from data gathered by satellites, drones and humans, some aggregated with software from U.S. contractor Palantir. Some data comes from Maven, the Pentagon’s pathfinding AI project now mostly managed by the National Geospatial-Intelligence Agency, say officials including retired Air Force Gen. Jack Shanahan, the inaugural Pentagon AI director.

Maven began in 2017 as an effort to process video from drones in the Middle East, spurred by U.S. Special Operations forces fighting ISIS and al-Qaeda. It has since grown to collect and analyze a wide array of sensor- and human-derived data.

AI has also helped the U.S.-created Security Assistance Group-Ukraine coordinate logistics for military assistance from a coalition of 40 countries, Pentagon officials say.

To survive on the modern battlefield, former Joint Chiefs chairman Gen. Mark Milley has said, military units must be small, elusive and constantly moving, because expanding sensor networks let anyone see anywhere on the globe at any moment. And what can be seen, he warned, can be shot.

To connect combatants more quickly, the Pentagon is prioritizing the development of intertwined battle networks, called Joint All-Domain Command and Control, to automate the processing of optical, infrared, radar and other data across the armed services. But the challenge is huge and mired in bureaucracy.

Christian Brose, a former Senate Armed Services Committee staff member now at the defense technology company Anduril, is among the military reform advocates who believe the effort is making progress.

The debate, Brose says, is shifting from whether this is the right thing to do to how to actually do it on the rapid timelines required. In his 2020 book, “The Kill Chain,” he argues for urgent retooling to keep pace with China in the race to develop smarter, cheaper networked weapons.

For now, the U.S. military’s focus is on “human-machine teaming” to augment its capabilities. Numerous unmanned air and sea vehicles monitor Iranian activity, and U.S. Marines and Special Forces use Anduril’s autonomous Ghost mini-helicopter, sensor towers and counter-drone technology to protect American forces.

Industry advances in computer vision have been essential. Shield AI’s software lets drones operate without GPS, communications or even remote pilots. It is the key feature of the company’s Nova quadcopter, which U.S. special operations units have used in conflict zones to scout buildings.

In development: the Air Force’s “loyal wingman” program, which intends to pair piloted aircraft with autonomous ones. An F-16 pilot might, for instance, send out drones to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.

That timeline does not quite mesh with Replicator’s, which many consider overly ambitious. The Pentagon’s vagueness on Replicator may be partly intended to keep rivals guessing, but it may also reflect that planners are still settling on the project’s features and goals, said Paul Scharre, a military AI expert and author of “Four Battlegrounds.”

Anduril and Shield AI, each backed by significant venture capital, are among the companies vying for contracts.

Shield AI expects to field an autonomous swarm of at least three uncrewed aircraft within a year, built on its V-BAT aerial drone, said Nathan Michael, the company’s chief technology officer. The U.S. military currently uses the V-BAT, without AI, on Navy ships, on counter-drug missions and in support of Marine Expeditionary Units.

It will be some time before larger swarms can be fielded with confidence, Michael said, stressing a step-by-step approach rather than rushing and risking failure.

Shanahan, the Pentagon’s inaugural AI chief, trusts only wholly defensive systems to operate autonomously, such as the Phalanx anti-missile systems on ships. He worries less about autonomous weapons making independent decisions than about systems that don’t work as advertised or that kill noncombatants or friendly forces.

Craig Martell, the department’s current chief digital and AI officer, is determined not to let that happen.

Regardless of a system’s autonomy, Martell said, there will always be a responsible agent who understands the system’s limitations, has trained thoroughly with it and has justified confidence about when and where to deploy it. That responsibility, he said, will never go away.

As for when AI will be reliable enough for lethal autonomy, Martell said it makes no sense to generalize. He trusts his car’s adaptive cruise control, for example, but not the technology that is supposed to keep it from drifting out of its lane. As the responsible agent, he said, he would not deploy that technology except in very constrained situations, and the same reasoning extends to the military.

Martell’s office is evaluating potential generative AI use cases, and it has a dedicated task force for that purpose, but its main focus is on testing and evaluating AI in development.

One urgent challenge, said Jane Pinelis, chief AI engineer at Johns Hopkins University’s Applied Physics Laboratory and former chief of AI assurance in Martell’s office, is recruiting and retaining the talent needed to test AI technology. The Pentagon cannot compete on salary: computer science PhDs with AI-related skills can earn more than the military’s top-ranking officers.

Testing and evaluation standards for AI also remain immature, a recent National Academy of Sciences report on Air Force AI noted.

Could that pressure one day lead the United States to field autonomous weapons that do not fully pass muster?

For now, Pinelis said, officials are operating under the assumption that there is time to do the work as rigorously and diligently as possible. But if the military is less than ready when it is time to act, someone will be forced to make a decision.

Source: wral.com