Posted on March 8, 2023

Feds Jeopardized Security of 1M Americans’ Online Accounts, Citing ‘Equity’

Luke Rosiak, Daily Wire, March 8, 2023

The federal government’s central technology arm jeopardized nearly one million online accounts by rejecting facial recognition technology when it was required for the high-security accounts, then cited “equity” to justify years of lying about its compliance with federal rules, The Daily Wire has learned.

The General Services Administration’s (GSA) technology group was tasked with creating Login.gov, a service that federal agencies would use to create accounts permitting access to government websites containing personal or sensitive information. The service was required to follow rules set by the National Institute of Standards and Technology (NIST), which included offering a hacker- and impersonator-resistant option for agencies dealing with the most sensitive data; that option must conform to a NIST standard called Identity Assurance Level 2 (IAL2).

GSA earned $187 million from the service after telling a government funding board that its solution met NIST’s exacting standards, and $10 million more from agencies that purchased the highest-security option from GSA on the basis of those representations.

But GSA knew its system was anything but compliant with IAL2, because it disregarded one of the standard’s most important security features: using biometrics such as facial recognition, eye scans, or fingerprints to prove that those seeking access to sensitive data were who they claimed to be. Officials opted simply to ignore that requirement because, they said, facial recognition technology might discriminate based on skin color, the GSA Inspector General found in a new audit.

“Put simply, Login.gov opted to ignore the standards and instead focused on selling to customers without regard to NIST requirements,” the IG wrote. The audit said GSA “misled their customer agencies” and “knowingly billed” them for a product they were not receiving.


The audit found that top officials ignored insiders who pointed out that a product whose sole aim was cybersecurity was not actually secure, and that once they were caught, they misled agencies into believing they were withdrawing the webcam security feature because of new policy on “equity.” In reality, it had been out of compliance the whole time, with GSA having tricked agencies into using insecure software for years—sending federal agency officials tasked with online security into a tailspin when they learned the truth.

“As of May 2022, Login.gov had 906,187 users of services that GSA purported to be IAL2 but did not comply,” the IG said. “Notwithstanding GSA officials’ assertions that Login.gov met [the] requirements, Login.gov has never included a physical or biometric comparison in production. Login.gov officials informed us that biometric comparison was not included in products offered to customer agencies, initially because the feature required testing before implementation and later because they further delayed it due to equity concerns.”

At multiple points, senior leaders with GSA’s Technology Transformation Services (TTS), the division under the Federal Acquisition Service (FAS) in charge of the project, “learned that Login.gov did not comply with IAL2 requirements. They did not, however, notify customer agencies of the noncompliance. The inability to meet IAL2 NIST standards became the topic of discussions among leaders and personnel at least as early as 2019, and included concerns that using individuals’ selfies to verify their identity could impact Login.gov’s rejection rates based on physical traits, such as skin color and tone,” it said.


The jig was up in January 2022, when a federal agency asked point-blank how the login system could possibly be compliant when it didn’t use webcams, fingerprints, or eye scanners. On January 20, 2022, the GSA released an “Equity Action Plan” that it said was required by the Biden administration, and days later, the GSA relied on the new policy to say that it would not meet NIST standards with Login.gov.


Agencies that relied on secure logins from GSA told the IG that “Login.gov’s noncompliance with the IAL2 standard created a greater risk of fraud for the customer agency,” “had an impact on the credibility of their program,” and could create liability because “the customer agency would be held responsible for allowing access to individuals at the wrong level.”