In an unprecedented move for the United States, the Space Force, the branch of the American Armed Forces dedicated to space operations, has publicly announced a temporary prohibition on the use of AI tools like ChatGPT, citing concerns over information security. The prohibition will remain in effect until the Space Force’s Chief Technology and Innovation Office issues a memorandum confirming that generative AI tools are safe to use.
Effective September 29th, 2023, Space Force personnel, often referred to as “Guardians,” have officially ceased using generative AI tools. The decision was formalized in a memo that explicitly identifies web-based AI tools as potential sources of data breaches and security vulnerabilities. As a result, the approximately 8,000 military professionals within the agency will no longer use large language models for research or development purposes.
Lisa Costa, the Space Force’s Chief Technology and Innovation Officer, confirmed the development. She emphasized that the prohibition was put in place as a precautionary measure and will be lifted once the agency’s researchers determine that these tools are safe and free from security risks.
It’s worth noting that this ban is a rare occurrence in the context of software restrictions in the Western world. While companies have previously banned various software and applications, prohibitions on American-developed software are uncommon. ChatGPT, for instance, is a product of OpenAI, a company in which Microsoft is a major investor.
Google has also ventured into the realm of AI with its own chatbot, Bard. However, while Bard meets user expectations to a certain extent, it has drawn similar information security concerns.
As of now, other branches within the Pentagon have not expressed similar concerns about the use of AI tools, maintaining that responsible, measured use of the technology can yield substantial benefits without compromising security.