A Military Chatbot Meets Its Match: How NIPRGPT Got Sidelined Over Security Breaches | by Jeremy McGowan | Apr, 2025 | Medium


The Department of the Air Force's experimental chatbot, NIPRGPT, was sidelined due to security breaches after a promising start as a secure, military-tailored AI assistant.

A Military Chatbot Meets Its Match: How NIPRGPT Got Sidelined Over Security Breaches

An Ambitious AI Experiment on DoD Networks

In mid-2024, the Department of the Air Force (DAF) launched an experimental chatbot called NIPRGPT — short for Non-classified Internet Protocol Router Generative Pre-trained Transformer — to bring ChatGPT-like capabilities onto U.S. military unclassified networks. Developed by the Air Force Research Laboratory (AFRL) under a project codenamed “Dark Saber,” NIPRGPT was touted as the first enterprise large language model (LLM) pilot on the Pentagon’s Non-classified Internet Protocol Router Network, known as NIPRNet, which is the DoD’s sensitive but unclassified network accessible to most service members. The AFRL Information Directorate in Rome, NY — a hub for Air Force software innovation — built NIPRGPT as part of its Dark Saber platform to rapidly deploy next-generation software to the force.

What exactly was NIPRGPT? In essence, it was a secure, military-tailored chatbot akin to OpenAI's ChatGPT, but hosted on government systems. It allowed Airmen, Guardians (Space Force personnel), and even DoD civilians and contractors with Common Access Cards (CACs) to log in and query AI models for help with everyday tasks. Users could ask it to draft emails or reports, summarize documents, or even assist with software…
