
Use of ChatGPT in the Military: A Gamechanger?

Photo Credits: Reuters

OpenAI has quietly removed language explicitly prohibiting the use of its technology for military purposes from its usage policy, which governs the use of powerful tools such as ChatGPT.

Prior to January 10, the policy explicitly banned "weapons development" and "military and warfare" activities. The updated policy retains a general prohibition on using the service to harm oneself or others and still cites "develop or use weapons" as an example. However, the removal of the specific ban on "military and warfare" use raises concerns, especially given the increasing integration of AI technologies into military applications, including intelligence, targeting systems, and autonomous military vehicles.

The American military has already deployed AI in conflicts such as the war in Ukraine, and Israeli forces have used AI-powered systems like "The Gospel" for target identification, supposedly aiming to "reduce human casualties" in attacks. Activists and AI watchdogs have consistently raised concerns about the incorporation of AI in military contexts, citing potential biases in AI systems and the risk of escalation in armed conflicts.

The policy changes coincide with a global trend in which militaries are keen to integrate machine learning techniques to gain strategic advantages. The Pentagon, in particular, is cautiously exploring potential applications of tools like ChatGPT and other large language models (LLMs). These tools can generate sophisticated text quickly and fluently. LLMs are trained on vast datasets of books, articles, and web content to produce human-like responses to user prompts. While the output of LLMs such as ChatGPT is often highly coherent and convincing, the models are optimized for fluency rather than for an accurate grasp of reality, which can lead to "hallucinations": confident but false or unsupported statements that undermine accuracy and factual reliability. Despite these limitations, the speed with which LLMs can process and analyze text makes them a natural fit for the information-intensive work of the Defense Department.
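For illustration, here is a minimal sketch of how a program might query an LLM such as ChatGPT, assuming OpenAI's official Python client (v1.x) and an API key in the environment; the model name and prompt are placeholders, and nothing in the API verifies the factual accuracy of the reply, which is exactly the hallucination risk described above.

```python
# Minimal sketch: querying an LLM with OpenAI's Python client (v1.x).
# Assumes OPENAI_API_KEY is set in the environment; the model name and
# prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize this field report in three bullet points: ..."}
    ],
)

# The reply is fluent, coherent text, but the API gives no guarantee that it
# is factually accurate; any claims it contains still need human verification.
print(response.choices[0].message.content)
```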

In an interview with Wired magazine last year, former Google CEO Eric Schmidt compared artificial intelligence systems to the advent of nuclear weapons before the Second World War. Schmidt said, "Every once in a while, a new weapon, a new technology comes along that changes things... Einstein wrote a letter to Roosevelt in the 1930s saying that there is this new technology—nuclear weapons—that could change war, which it clearly did. I would argue that [AI-powered] autonomy and decentralized, distributed systems are that powerful."