Senate investigators found that federal agencies do little or nothing to stop crimes and abuses committed in the systems they use to collect public comments on proposed regulations.
A new bipartisan report, issued Thursday, said the agencies haven't acted in even the most egregious cases, where real people's identities, including dead people's, are stolen and then submitted with comments they never wrote.

Findings of Fact:
The Senate Permanent Subcommittee on Investigations report comes after The Wall Street Journal in 2017 exposed thousands of other fraudulent comments on regulatory dockets at federal agencies, some posted under what appear to be stolen identities by computers programmed to pile comments onto the dockets.
Most federal agencies lack appropriate processes to address allegations that people have submitted comments under fraudulent identities. Recent reports demonstrate that individuals are using false identities to submit comments, yet agencies can neither determine whether a comment was submitted under a valid identity nor respond adequately to allegations of fraud or identity theft. Only one agency contacted by the Subcommittee—the CFTC—said that it had referred suspicious activity to the Federal Bureau of Investigation (“FBI”). Other agencies, including the CFPB, the Department of Labor, and the FCC, were aware of comments submitted under false identities regarding their rules but took little action to address them.
The FCC’s process for addressing comments submitted under false identities potentially causes additional harm to victims of identity theft and to the comment process as a whole. The only remedy the FCC offers people who allege that their identities were used to post a comment they did not authorize is to post a separate comment establishing their own position on the issue. This adds even more comments to often lengthy dockets, making them less useful to the public and to FCC staff, and it requires victims to participate in a regulatory process in which they may have no interest.
None of the commenting systems use CAPTCHA or other technology to ensure that real people, instead of bots, are submitting comments to rulemaking dockets. This leaves the commenting process more vulnerable to abuse by malicious actors.