What is the FCC going to do about millions of bot comments during the net neutrality repeal debate?
Data scientist Jeff Kao was skeptical of the nearly 23 million public comments the Federal Communications Commission received during the net neutrality repeal debate last year.
“I would not guess one in 15 people submitted an FCC comment,” says Kao, a machine learning engineer at Atrium Legal Technology Services in San Francisco. “The amount doesn’t pass the smell test.”
He tested his suspicion by analyzing the publicly available comments and found that at least 1.3 million were submitted under a stolen or misused identity. While fake comments were submitted on behalf of both sides of the debate, the vast majority, Kao says, were anti-net neutrality. However, the problem was bigger than he initially knew.
The New York attorney general’s office, which is suing the commission over net neutrality along with 22 other attorneys general, said as many as 2 million Americans had their identities stolen. A 2017 report by data analytics consulting firm Emprata for industry group Broadband for America found that nearly 445,000 comments came from Russian email addresses, and that a similar number came from Germany.
While the FCC has taken issue with the characterization of the New York attorney general’s claims, Jessica Rosenworcel, an FCC commissioner and a Democrat, released a statement saying the agency’s net neutrality repeal process “turned a blind eye to all kinds of corruption in our public record—from Russian intervention to fake comments to stolen identities in our files.”
Even with these allegations, the FCC has not changed its comments process, which Kao says is “concerning because we all start to lose a little bit of faith in our democracy.”
While heralded for improving access to government, the move online has created new vulnerabilities in America’s democratic processes. And the problem is not the FCC’s alone: with these vulnerabilities now known, governments, advocates and software companies are sounding alarms and promoting solutions.
Platform to speak
Rosenworcel, who is in the minority on the Republican-controlled commission, tells the ABA Journal that organizations receiving fake comments or comments submitted under stolen identities include the Consumer Financial Protection Bureau, Department of Labor, Federal Energy Regulatory Commission, and Securities and Exchange Commission.
Renee DiResta, head of policy at Data for Democracy, an organization that uses data for social good, says Russia’s misinformation campaign during the 2016 election and the spam FCC comments are examples of “manufactured consensus.”
While this approach to disinformation is most common on social media networks, she says it can be used to sway public policy as well. Rosenworcel adds: “There are eerie parallels between what we saw in the net neutrality public record and the reported interference in the 2016 election. We should pay attention to them. We should figure out what’s going on.”
In the case of the FCC, the public comment portal is “woefully deficient,” Rosenworcel says. To limit the impact of bots and spam, she recommends using CAPTCHAs or requiring two-factor authentication before submitting a comment. However, she says the current FCC budget does not include resources to make these changes. The FCC did not respond to a request for comment. (In July, FCC Chairman Ajit Pai said in a letter to U.S. Sen. Jeff Merkley of Oregon that the public comment portal may get an upgrade, including CAPTCHAs.)
As for identity theft, DiResta notes that the names illegally used to submit comments often come from hacked user lists, which can be found online. Submitted comments could be checked against these lists, helping an agency flag and investigate misconduct.
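The cross-check DiResta describes could be sketched roughly as follows. This is a minimal illustration, not any agency's actual system: the field names, the sample records and the normalization rules are all assumptions for the example.

```python
# Hypothetical sketch of the check DiResta describes: flag public comments
# whose submitter identity also appears in a known breached-data list.
# Field names and records are illustrative, not from any real dataset.

def load_breached_identities(records):
    """Normalize (name, email) pairs from a leaked user list into a set."""
    return {(name.strip().lower(), email.strip().lower())
            for name, email in records}

def flag_suspect_comments(comments, breached):
    """Return the comments whose submitter matches a breached identity."""
    flagged = []
    for comment in comments:
        key = (comment["name"].strip().lower(),
               comment["email"].strip().lower())
        if key in breached:
            flagged.append(comment)
    return flagged

breached = load_breached_identities([("Jane Doe", "jane@example.com")])
comments = [
    {"name": "Jane Doe", "email": "JANE@example.com", "text": "Repeal it."},
    {"name": "John Roe", "email": "john@example.net", "text": "Keep it."},
]
print([c["name"] for c in flag_suspect_comments(comments, breached)])
# prints ['Jane Doe']
```

A match would not prove fraud on its own; as DiResta suggests, it would simply give an agency a short list of submissions worth investigating.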
One company working to improve the online comment system is SmartComment, headquartered in Los Angeles. Keith Guille, a public information officer at the Wyoming Department of Environmental Quality in Cheyenne, says his department uses SmartComment so the public can read about a rule change and make a comment.
Previously, Guille says, the department would take out a legal ad in newspapers or post on an email discussion list or website. Now, SmartComment creates a centralized database of comments received online, through phone calls or by letter, for example. “It really helps the staff to organize things much easier,” Guille says. Since adopting the platform, he has seen an increase in comments, but that has not come with an onslaught of spam, he says.
Acknowledging that bots play a role in every form of electronic communication, Tim Mullen, co-founder of SmartComment, says agencies have to adopt technology that cuts through the manufactured noise. He says his product uses various features to fight bots and spam, but he would not be more specific for security reasons.
Rosenworcel at the FCC thinks that “our openness is being exploited,” and that there are concrete steps to be taken to protect democratic institutions. “Nobody said that digital age democracy was going to be easy,” she says.
This article was published in the August 2018 ABA Journal magazine with the title "No Comment: The FCC received millions of comments during the net neutrality repeal debate from bots exploiting stolen or misused identities—what is the agency going to do about it?"